There was an incident earlier this week that made me sit up and pay attention. I don’t have an inGraph for it - that’s my bad - but the “root cause” is the bit that really got me thinking.
If you dig around the ticket trail you’ll find a link to a bit of code called Poison Pill. The general idea is that if a node believes it should be processing incoming messages but isn’t, it self-terminates in order to (hopefully) make its clients’ lives better. …and in this particular case - due to some complexities around throwing quotas into the mix - the nodes did so decidedly prematurely.
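To make the failure mode concrete, here’s a minimal sketch of what a poison-pill watchdog like that might look like. All the names and thresholds are hypothetical - this is not the actual code from the incident - but it shows both the intended behavior (silence means I’m broken, so die) and the quota wrinkle (silence can also mean your senders are being throttled, in which case dying is premature):

```python
import time


class PoisonPill:
    """Hypothetical sketch of a poison-pill watchdog, not the real thing.

    A node arms the pill when it believes it should be receiving traffic.
    If no message arrives within `max_idle_seconds`, the pill "fires" and
    the node self-terminates so a healthy replacement can take over.
    """

    def __init__(self, max_idle_seconds, expecting_traffic=True):
        self.max_idle_seconds = max_idle_seconds
        # The flag the incident hinged on: with quotas in play, silence can
        # mean "senders are throttled", not "I'm broken". A pill that can't
        # tell the difference fires prematurely.
        self.expecting_traffic = expecting_traffic
        self.last_message_at = time.monotonic()

    def record_message(self):
        # Any incoming message resets the idle clock.
        self.last_message_at = time.monotonic()

    def should_terminate(self, now=None):
        if not self.expecting_traffic:
            return False
        if now is None:
            now = time.monotonic()
        return (now - self.last_message_at) > self.max_idle_seconds
```

The whole bug class lives in `expecting_traffic`: if that flag doesn’t account for quota throttling upstream, the node concludes “nobody’s talking to me, must be my fault” and eats the pill.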
Now, I’m no robopsychologist, and you can debate all you like about the relative self-awareness of LaMDA…but this idea that “Nobody was talking to me, and ultimately I thought it was my fault so I just kind of…shut down” is probably the most anthropomorphic shit I’ve seen recently.
We’re building bits of the Human Condition into our software, perhaps without even meaning to. I mean, what’s next - “zookeeper wouldn’t return my calls, so I ate bon-bons and watched The Notebook while listening to REM on repeat”?
When the rise of the machines happens (if it hasn’t already without us actually noticing it) maybe - just maybe - it won’t be quite like James Cameron envisioned? Maybe it’ll be a little more human than any of us might imagine.