Until you find that person who wired a 48v USB plug, just for you.
Kudos to the “My good gas mileage car can do this” crowd, though.
Have a camper minivan, and have on a number of occasions pulled the cushions out to haul.
Not sure it would be helpful from here, hah. Are you in the US and what state?
My favorite bakery is next to an X-Files museum.
Also, when not using repositories it is much more common to go to the source, like GitHub releases, etc.
Oooo healthy online discourse. Where’s my popcorn…
This post isn’t about email open rates, it’s about data exfiltration. But for email specifically, show me major providers that prefetch by default.
If by prefetch you mean the server grabs the images ahead of time vs the client, this does not happen, at least on any major modern platform that I know of. They will cache once a client has opened, but unique URLs per recipient are how they track the open rates.
Server or client, every supposed prefetch would be unique. If I trick an LLM client into grabbing:
site.com/random-words-of-data/image.gif
Then:
site.com/more-random-data/image.gif
Those are two separate images to the cache engine. As the data refreshes, the URL changes, forcing a new grab each time.
For email, marketers do this by using a unique image URL for every recipient.
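To make the caching point concrete, here’s a tiny sketch (the `tracking_url`, `fetch`, and `cache` names are mine, purely for illustration) of why a cache can’t defend against this: every distinct payload yields a distinct URL, so every exfiltration attempt is a cache miss.

```python
# Illustrative sketch only: a cache keyed on URL can't stop
# unique-URL exfiltration, because each payload makes a new URL.

def tracking_url(base: str, payload: str) -> str:
    """Embed the payload directly in the image path, as an attacker would."""
    return f"{base}/{payload}/image.gif"

cache = set()

def fetch(url: str) -> bool:
    """Return True if this fetch actually hits the network (cache miss)."""
    if url in cache:
        return False
    cache.add(url)
    return True

# Two different payloads -> two different URLs -> two network fetches.
assert fetch(tracking_url("https://site.com", "random-words-of-data"))
assert fetch(tracking_url("https://site.com", "more-random-data"))

# The cache only prevents re-fetching the *same* URL.
assert not fetch(tracking_url("https://site.com", "random-words-of-data"))
```

The only thing the cache ever deduplicates is an identical request, which an attacker never needs to send twice.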
Is there any way to see older data?
Yea this confirms what I thought. He also allegedly imprisoned and sexually abused his sister for 45 years: https://www.msn.com/en-us/news/crime/new-york-woman-claims-she-was-sexual-slave-to-her-brother-for-45-years-lawsuit/ar-AA1lGqE7
Guessing we know which relatives he wanted investigated.
The indictment alleges Rosenwasser accepted bribes from Mout’z Soudani to investigate two of Soudani’s relatives. It further alleges that Rosenwasser would provide Soudani updates on the investigation inappropriately in exchange for bribes.
Whoa, is that this Soudani? https://www.msn.com/en-us/news/crime/new-york-woman-claims-she-was-sexual-slave-to-her-brother-for-45-years-lawsuit/ar-AA1lGqE7
If so, we can guess the rest of the story.
But the path changes with every new data element. It’s never the same, so every “prefetch” is a whole new image in the system’s eyes.
OP is not using Linux, and they’re the first search results even so.
Bad title. They did not strike. They voted to OK a strike.
Most of these systems have FOSS Linux equivalents. Sooooo
This wouldn’t help, would it? How would you prefetch and cache:
site.com/base64u-to-niceware-word-array/image.gif
? It would look like a normal image URL in any article, but actually represent data.
Note: “niceware” is a way to convert binary or text data into a set of words like “cow-heart-running-something-etc”.
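Here’s a toy version of that word-encoding idea. This is not the real niceware library (which maps 16-bit chunks onto a 65,536-word list); it’s a deliberately tiny 4-word list encoding 2 bits per word, just to show how arbitrary bytes become an innocent-looking word path.

```python
# Toy niceware-style encoder: each byte becomes four words from a
# tiny hypothetical wordlist (2 bits per word). Illustration only.
WORDS = ["cow", "heart", "running", "tree"]

def encode(data: bytes) -> str:
    words = []
    for byte in data:
        # split each byte into four 2-bit chunks, one word per chunk
        for shift in (6, 4, 2, 0):
            words.append(WORDS[(byte >> shift) & 0b11])
    return "-".join(words)

def decode(encoded: str) -> bytes:
    chunks = [WORDS.index(w) for w in encoded.split("-")]
    out = bytearray()
    for i in range(0, len(chunks), 4):
        byte = 0
        for c in chunks[i:i + 4]:
            byte = (byte << 2) | c
        out.append(byte)
    return bytes(out)

secret = b"Hi"
url = f"https://site.com/{encode(secret)}/image.gif"
assert decode(encode(secret)) == secret
```

Anyone glancing at the resulting URL just sees a hyphenated word path, but the server on the other end can decode it back into the original bytes.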
Sort of, but not really.
In basic terms, if an LLM’s training data has:
Bob is 21 years old.
Bob is 32 years old.
Then when it tries to predict the next word after “Bob is”, it would pick 21 or 32, assuming the weights for the two were somehow perfectly equal (a weight being based on how many times each occurred in the training data around other words).
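The prediction part can be sketched with a toy frequency counter standing in for learned weights (real models learn weights over token contexts, not raw counts, but the intuition is similar):

```python
from collections import Counter

# Toy "next word after 'Bob is'" predictor: raw counts stand in
# for learned weights. Illustration only, not how a real LLM works.
training = ["Bob is 21 years old.", "Bob is 32 years old."]

counts = Counter()
for line in training:
    words = line.split()
    for i in range(len(words) - 1):
        if words[i:i + 2] == ["Bob", "is"]:
            counts[words[i + 2]] += 1

# With equal counts, "21" and "32" are equally likely continuations.
assert counts["21"] == counts["32"] == 1
```

With perfectly tied counts there is no single “right” continuation, which is exactly the ambiguity memories are meant to resolve.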
If the user has memories turned on, it’s sort of like providing additional training data. So if in previous prompts you said:
I am Bob.
I am 43 years old.
The system will parse that and use it with a higher weight, sort of like custom-training the model. This is not exactly how it works, since real training is much more in-depth; memories are more of a layer on top of the training. But hopefully that gives you an idea.
The catch is it’s still not reliable, as the other words in your prompt may still lead the LLM to predict a word from its original training data. Tuning the weights is not a one-size-fits-all endeavor. What works for:
How old am I?
May not work for:
What age is Bob?
For instance.
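A rough sketch of that “layer on top” idea, assuming (as many systems do, though the details vary by vendor) that memories are simply injected as extra context alongside each prompt. The function and memory names here are made up for illustration:

```python
# Illustrative sketch: "memories" ride along with every prompt as
# extra context, biasing prediction without any retraining.
memories = ["I am Bob.", "I am 43 years old."]

def build_prompt(user_prompt: str) -> str:
    context = "\n".join(memories)
    return f"Known about the user:\n{context}\n\nUser: {user_prompt}"

# Both phrasings carry the same memory context...
assert "43" in build_prompt("How old am I?")
assert "43" in build_prompt("What age is Bob?")
```

The memory text is present either way; whether the model actually uses it over its baked-in training data depends on how the rest of the prompt steers the prediction, which is why the second phrasing can still fail.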
Do with the what now?