In 2013, I started my software engineering career at LinkedIn after 4 years of university and ~4 years of hobbyist coding before that. When I interviewed, LinkedIn would give pretty personalized printouts of folks' networks. Here's mine:
A couple years ago, I started an ongoing personal project. Let's start with a picture before getting into details and eventually an animation:
Much like the LinkedIn visualization, we have a network here, but it isn't composed of humans and there isn't an obvious center. You can think of the nodes instead as virtual neurons, which connect together to form a digital brain. Let's zoom in on one part:
Most of the connections in this frame aren't highlighted, but we do have some notable highlights in this cropped section. When I make a voice memo on my phone, it ends up being synced to my laptop and then transcribed (fully offline with minimal AI). The light blue writing ✍️ icon here sends the transcription as a message to the pink speaking 🗣️ virtual neuron.
This pink neuron's job is to broadcast my captured speech to "listener" neurons, represented here as blue and white ear 👂 icons as well as a few others 🍚⚠️😬. In this case, none of the listeners propagated the message because that voice memo wasn't relevant to them. So, let's look at another frame, a different moment for my digital brain.
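For the curious, here's a rough sketch of that broadcast step - illustrative Python with made-up names, not the project's actual code. The idea is just that the speech neuron fans each transcription out, and every listener decides for itself whether to react:

```python
# Sketch only: the speech neuron broadcasts, listeners decide relevance.

class LitterListener:
    def receive(self, transcription: str) -> None:
        # Only reacts to litter-related voice memos.
        if any(word in transcription.lower() for word in ("pee", "poo", "litter")):
            print(f"litter listener reacting to: {transcription!r}")

class LightsListener:
    def receive(self, transcription: str) -> None:
        # Only reacts to voice memos that mention the lights.
        if "lights" in transcription.lower():
            print(f"lights listener reacting to: {transcription!r}")

class SpeechBroadcaster:
    def __init__(self, listeners):
        self.listeners = listeners

    def receive(self, transcription: str) -> None:
        # Broadcast to every listener; irrelevant memos simply get ignored.
        for listener in self.listeners:
            listener.receive(transcription)

broadcaster = SpeechBroadcaster([LitterListener(), LightsListener()])
broadcaster.receive("remember to buy oat milk")  # no listener reacts
```

That "nobody reacts" case is exactly what the frame above shows: the broadcast happened, but none of the listeners cared.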
Here we again see the broadcast 🗣️ because I took a voice memo, but this time the neuron cluster on the right reacts to my voice note. The statement was, "set the lights to 100% relaxed", and the white ear 👂 reacted, in part by sending a message to the lights controller 🕹️, which sends messages to neurons dedicated to each of my smart lightbulbs 💡. This let me replace and retire Alexa 😆
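That chain - ear 👂 to controller 🕹️ to bulbs 💡 - is easier to see in a sketch. Again, this is illustrative Python with invented names, not how the repo actually does it, and the per-bulb call is a stand-in for a real smart bulb API:

```python
import re

# Sketch only: a toy lights controller that parses a command like
# "set the lights to 100% relaxed" and forwards it to one neuron per bulb.

class BulbNeuron:
    def __init__(self, name: str):
        self.name = name

    def receive(self, brightness: int, scene: str) -> None:
        # In reality this would call the smart bulb's API.
        print(f"{self.name}: brightness={brightness}%, scene={scene}")

class LightsController:
    def __init__(self, bulbs):
        self.bulbs = bulbs

    def receive(self, transcription: str) -> None:
        match = re.search(r"set the lights to (\d+)%\s*(\w+)?", transcription.lower())
        if not match:
            return  # not a lights command; ignore it
        brightness = int(match.group(1))
        scene = match.group(2) or "default"
        for bulb in self.bulbs:
            bulb.receive(brightness, scene)

controller = LightsController([BulbNeuron("bedroom"), BulbNeuron("office")])
controller.receive("set the lights to 100% relaxed")
```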
The main thing I started this project for, though, was my cats - especially Peanut.
Image description: two cats staring into the camera from a light blue tiered cat tree. Peanut (left, lower) has yellower eyes, whereas Butter (right, above) has greener ones.
Peanut has a life-threatening chronic condition, and tracking his litter use gives me peace of mind - or clear evidence when he needs an emergency vet visit. The most important question is: when did he last pee?
That question turns out to be too complicated to answer with this simple digital brain. Instead, the virtual neurons in the top left (shown below) generate a note in my notes app (shown below the brain frame) with a simple summary of the number 1s and number 2s for the day, plus an ordered list of the events being summarized.
It turns out, though, that generating a short summary like this is fairly easy, and as long as the counts are consistent, I have my peace of mind. Technically I don't have 100% certainty, but I'm fine unless:
1. He stops peeing, and
2. His sister's count increases to match, and
3. I don't notice anything unusual with either of them
I plan to cover this possibility better in the future, but for now I'm getting a lot of benefit for little complexity. When I am worried, I skip past the summary and audit the events myself, potentially clicking into the "ref(erence)" links to review each voice memo, figure out the last pee clump and who made it as best I can, and look for anything my digital brain might have missed before... panicking.
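To give a concrete sense of that summary note, here's a sketch of rolling a day's events up into counts plus an ordered list with ref links. The event fields and the note format are invented for illustration; this isn't the project's actual code:

```python
from dataclasses import dataclass
from datetime import datetime

# Sketch only: turn a day's litter events into a short summary note.

@dataclass
class LitterEvent:
    when: datetime
    kind: str   # "pee" or "poo"
    ref: str    # link back to the source voice memo

def summarize(events: list[LitterEvent]) -> str:
    events = sorted(events, key=lambda e: e.when)
    pees = sum(1 for e in events if e.kind == "pee")
    poos = sum(1 for e in events if e.kind == "poo")
    lines = [f"Today: {pees} pee(s), {poos} poo(s)", ""]
    for i, event in enumerate(events, start=1):
        lines.append(f"{i}. {event.when:%H:%M} {event.kind} ([ref]({event.ref}))")
    return "\n".join(lines)

print(summarize([
    LitterEvent(datetime(2025, 1, 1, 8, 5), "pee", "voice-memo-0805.md"),
    LitterEvent(datetime(2025, 1, 1, 14, 30), "poo", "voice-memo-1430.md"),
]))
```

As long as the counts in a note like that stay consistent day to day, I don't need to open the underlying events at all.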
I want to talk about one more "frame" or moment of my digital brain's day before showing a pretty animation.
In this frame, nothing is reacting to my voice. What drove this burst of activity was a timer neuron: the green clock ⏰ labeled "TimeKeeper under HomeMonitor" 🏠. The HomeMonitor neuron had set a timer in the past, and the green highlight from the clock ⏰ indicates the timer is going off at this moment.
When that happens, the HomeMonitor 🏠 neuron requests the latest CO2 reading from my Aranet4s 😶🌫️ and the latest AQI (air quality index) from my PurpleAir sensor 💨. Then my HALT 🛑 neuron is updated, since it sends me a push notification when the CO2 is too high (along with some other behavior).
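For a rough idea of that timer-driven flow, here's a sketch. The sensor reads and the push notification are stand-ins - the real system talks to the Aranet4s, the PurpleAir, and an actual notification service - and the threshold is made up:

```python
# Sketch only: what a single TimeKeeper tick might trigger.

CO2_ALERT_PPM = 1000  # invented threshold, for illustration only

def read_co2_ppm() -> int:
    return 1250  # placeholder for the latest Aranet4 reading

def read_aqi() -> int:
    return 42    # placeholder for the latest PurpleAir reading

def send_push_notification(message: str) -> None:
    print(f"PUSH: {message}")  # placeholder for a real push notification

def halt_update(co2_ppm: int, aqi: int) -> None:
    # The HALT neuron stays quiet unless something actually needs attention.
    if co2_ppm > CO2_ALERT_PPM:
        send_push_notification(f"CO2 is {co2_ppm} ppm (AQI {aqi}) - time to ventilate")

def home_monitor_on_timer() -> None:
    # Called whenever the TimeKeeper timer fires; the sketch just calls it
    # once rather than scheduling a recurring timer.
    halt_update(read_co2_ppm(), read_aqi())

home_monitor_on_timer()
```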
Anyway, this is probably too long already. I hope there were enough pictures. I'll share that animation I mentioned before getting up on my soapbox for a moment:
In a recent video, tech YouTuber Marques Brownlee talked about how Apple Intelligence hasn't taken off:
[00:14:48] Like to think if I'm Uber, if I'm developer for Uber,
[00:14:52] and this new Siri is supposed to be able to reach into my app and perform an action like calling a car.
[00:15:00] So the user just goes, hey Siri, call me an Uber
[00:15:02] to the airport.
[00:15:04] And then it does it without ever opening my app.
[00:15:07] That's... I don't actually like that very much.
[00:15:10] That gives me less control.
[00:15:12] I don't get to do as much with that experience, even though it would be really cool for the end user.
I hate how apps are silos that don't cooperate with each other - or, as shown here, not even with the user. It's like your phone is full of digital neurons that want to sell you things and show you ads, so they're reluctant to talk to each other because your attention is valuable to them. But I don't want to open an app or two to check the air quality, and I don't want push notifications unless absolutely necessary; I want apps to do work in the background and stay out of my way until my attention is needed. If apps could cooperate, they could basically do some of my thinking for me (and help me not have to look at my phone).
Almost all of my virtual neurons are regular plain-text code rather than opaque AI models, and no LLMs are used for any of what I've talked about here. Chatbots seem tough to make behave reliably, they use a lot of energy and require special hardware (or an internet connection), and I haven't needed them. But if some AI or "agent" comes along that's really useful, I would integrate it here as a neuron. If I really wanted to, I could replace any neuron with a "manager" that supervises the old working one alongside the new AI one, to compare their performance. I look forward to one day wanting to, and will continue writing code by hand and exploring ideas like voting in the meantime (though I'm excited for Monty!).
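To sketch what that "manager" might look like (illustrative Python, invented names, not the project's actual code): route each message to both the trusted hand-written neuron and the candidate AI neuron, act on the trusted one's answer, and log disagreements for later review.

```python
# Sketch only: a manager neuron that compares a trusted implementation
# against a candidate without letting the candidate affect behavior.

class HandWrittenCounter:
    def handle(self, transcription: str) -> str:
        return "pee" if "pee" in transcription.lower() else "other"

class CandidateAiCounter:
    def handle(self, transcription: str) -> str:
        # Stand-in for a model call; imagine an LLM or classifier here.
        return "pee" if "litter" in transcription.lower() else "other"

class ManagerNeuron:
    def __init__(self, trusted, candidate):
        self.trusted = trusted
        self.candidate = candidate
        self.disagreements = 0

    def handle(self, transcription: str) -> str:
        trusted_answer = self.trusted.handle(transcription)
        candidate_answer = self.candidate.handle(transcription)
        if trusted_answer != candidate_answer:
            self.disagreements += 1
            print(f"disagreement on {transcription!r}: "
                  f"{trusted_answer} vs {candidate_answer}")
        return trusted_answer  # the trusted neuron stays in charge

manager = ManagerNeuron(HandWrittenCounter(), CandidateAiCounter())
manager.handle("Peanut peed in the litter box")  # both agree, nothing logged
manager.handle("Butter used the litter box")     # disagreement gets logged
```

The trusted neuron stays in charge, so a flaky candidate can't break anything while I gather evidence about whether it's actually better.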
The source and my ongoing commits are available under the open source MIT license: https://github.com/micseydel/tinker-casting 🎉 It is admittedly a mess, but if you want to give it a try, feel free to reach out and I'll prioritize cleaning it up. For now I'm just making sure it's out there.