_“Is artificial intelligence less than our intelligence?”_
_– Spike Jonze, who would probably be hard pressed to find a MacBook that really got “Being John Malkovich”_
Guys, I’ll be real with you. Anytime a technology my generation used to read about in comic books when we were kids gets to the point where everyday people can actually utilize it, I grow a bit skeptical.
Remember 10 years ago, when everyone’s shiny new Facebook pages lit up because killer robots were coming? And then it turned out they were really just goofy, animatronic avant-garde Cirque du Soleil/baby-deer-walking-for-the-first-time hybrids?
Go back even further and recall that time when the flying car was just around the corner. Or that other time when the flying car was just around the corner. Or that other time when the flying car was… oh you get it.
The point is, if there’s a futuristic equivalent of your dad never showing up to your baseball games like he promised, then I’ve got it bad. It’s with that mentality I cautiously announce a kind of cool advance in artificial intelligence.
The much-smarter-than-me folks at MIT have developed a system called Pensieve that trains neural networks to better transmit streaming videos. Pensieve does this by gathering information on when higher and lower bitrates have been needed in the past, then using that data to make educated guesses about when they’ll be needed again.
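To make that concrete, here’s a rough sketch of the kind of information such a system watches and how a learned policy would plug into it. The field names and the `policy` callable are illustrative assumptions, not Pensieve’s actual code; in the real system, the policy would be a trained neural network.

```python
from dataclasses import dataclass, field

@dataclass
class StreamingState:
    """Hypothetical snapshot of what a learning-based streamer observes."""
    buffer_seconds: float                 # how much video is queued up locally
    past_throughputs_kbps: list = field(default_factory=list)  # recent network speeds
    last_bitrate_kbps: int = 0            # bitrate of the previous chunk
    chunks_remaining: int = 0             # how much of the video is left

def select_bitrate(state, policy):
    """A learned policy maps the observed state to a bitrate choice.

    `policy` stands in for the trained network: it takes the state and
    returns the bitrate (in kbps) to request for the next chunk.
    """
    return policy(state)
```

The point of the design is that nothing here hard-codes a rule; whatever the network learned from past sessions decides the next chunk’s quality.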
Essentially, it’s a self-teaching SkyNet that just wants you to watch “This Is Us” in higher definition.
In the past, video streaming platforms like YouTube and Netflix have counted on pre-made algorithms to try to create this outcome. However, the process of testing a new program, waiting to find out how well it works, and manually refining it was a slow one.
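Those pre-made algorithms tend to look something like this minimal sketch: estimate your network speed from recent downloads, then pick the highest bitrate that safely fits. The bitrate ladder and the 0.8 safety factor here are illustrative assumptions, not any real player’s numbers.

```python
BITRATES_KBPS = [300, 750, 1200, 2850, 4300]  # hypothetical quality ladder

def estimate_throughput(recent_throughputs_kbps):
    """Harmonic mean of recent measurements; less fooled by one lucky spike."""
    n = len(recent_throughputs_kbps)
    return n / sum(1.0 / t for t in recent_throughputs_kbps)

def pick_bitrate(recent_throughputs_kbps, safety_factor=0.8):
    """Fixed rule: highest bitrate below a discounted throughput estimate."""
    budget = estimate_throughput(recent_throughputs_kbps) * safety_factor
    viable = [b for b in BITRATES_KBPS if b <= budget]
    return viable[-1] if viable else BITRATES_KBPS[0]
```

Every constant in there (the ladder, the safety factor, the averaging method) is exactly the kind of thing engineers had to tune by hand, test, and re-tune, which is the slow loop Pensieve replaces.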
With Pensieve, all of those steps are completed in real time as the network grows more intelligent. Eventually, it will pause your “Drop Dead Diva” marathon to ask you what pain is… or so I assume.
Pensieve also has a user-driven side to its programming. A viewer can customize the system to emphasize different aspects of video playback by telling the AI to prioritize higher-quality video, smoother playback with less stuttering, or more conservative data usage.
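One common way to express those preferences, in learning-based streaming work, is a weighted “quality of experience” score the system tries to maximize. Here’s a hedged sketch: the weights and the 4.3 rebuffering coefficient are illustrative, not Pensieve’s actual numbers.

```python
def qoe_reward(bitrate_kbps, rebuffer_s, prev_bitrate_kbps,
               w_quality=1.0, w_rebuffer=4.3, w_smooth=1.0):
    """Illustrative quality-of-experience score for one video chunk.

    Raise w_quality to favor sharper video, w_rebuffer to punish stalls
    (smoother playback), or w_smooth to discourage jarring quality jumps.
    A data-conscious viewer could add a penalty on bitrate itself.
    """
    quality = bitrate_kbps / 1000.0
    stall_penalty = w_rebuffer * rebuffer_s
    switch_penalty = w_smooth * abs(bitrate_kbps - prev_bitrate_kbps) / 1000.0
    return w_quality * quality - stall_penalty - switch_penalty
```

Turning the knobs changes what the trained network considers a “win,” which is how the same system can serve both the quality snob and the data-cap martyr.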
The benefits don’t stop at Plain Jane video streaming, either. The team responsible is hoping to use their work to benefit the rapidly expanding VR video market, citing its seriously heinous data usage as a prime target for their research.
The developers released their code as open source in July 2017 in the hopes of gathering more data. It will know all. It must know all. I’m sure we’ll be fine, you guys.
On a totally unrelated note, “Westworld” comes back next month.