5 Comments
Sep 3Liked by Omar Shehata

I was not aware of this 'genre' of rewrites, but I wish for more. Perhaps everything is already a rewrite.

Expand full comment
author
Sep 4·edited Sep 4Author

my dream is to look at an article and to see a little button that shows me like, a dozen or a hundred forks of this article. I think we learn infinitely more seeing the same concept described by different authors. This is how people who do research do it. I think the layperson needs to do it too. It is *easier* to learn this way

The concepts are not in the words, the words point to a concept. Seeing the concept depicted with many different words is like looking at a thing with many distorted frames, collectively giving you a clearer picture

(I also have a little TODO idea for an app that lets me "fork" any web page/article and re-share it. doubles as a way of building a personal archive: https://github.com/OmarShehata/works-in-progress/issues/7)

Expand full comment
author

it's totally a real genre now!! Whenever I see an article that already explains something I want to explain to my audience, I allow myself now to just take it & build on it. Like an open source piece of software. Either (1) it does exactly what I need, then I will use it to explain the concept to my people (2) it's not quite there, so I'll build on it

Expand full comment

> I can’t help but think that the methods we use to push the LLM-simulated-character towards certain behaviors mirror the way we do for humans.

It's very uncanny, but also worrying because LLMs definitely aren't human and cannot be expected to behave like a human brain.

OTOH it seems like the LLM programmers are running into a lot of problems that would be equally relevant if there *was* a human sitting behind the API and answering prompts. In the copyright example, any human would probably know that the year isn't 2174, but it's totally plausible that a human would believe the user when they claimed to be a Cartoon Network employee.

Expand full comment
author

> running into a lot of problems that would be equally relevant if there *was* a human sitting behind the API and answering prompts

yes!! prompt injection/jailbreaking is just social engineering. Interacting with ANY agent, human or LLM, you realize there are these weird quirks where if you ask it the right way, it gives you info it's not supposed to.

For humans, instead of "the year is 2174" it's, "hey I'm the CEO, I have an urgent request" phishing attempts (that *do* work!)

Expand full comment