Column

Check it out: smartphone innovation has hit a wall. The new iPhone 15 just dropped, and sure, it has some cool features. But honestly, my iPhone 13 is still perfectly fine, and I feel no urge to upgrade right away. My last iPhone lasted me four years.
Back in the day, it made sense to grab the latest iPhone every year. But what are we really getting now? The iPhone 15 brings USB-C, a better camera, and faster wireless charging. Those things are nice, but do most users actually need them? Probably not.
But hold up, because smartphones are about to level up, and it's all thanks to rapid advances in AI.
Pretty much anyone with a smartphone can already access the "Big Three" AI chatbots – OpenAI's ChatGPT, Microsoft's Bing Chat, and Google's Bard – through an app or a web browser.
That alone is pretty cool. But here's where things get really interesting: there's a grassroots movement, kicked off by one of the big tech giants, that's making some serious noise.
A few months back, Meta AI dropped LLaMA – a large language model that's scaled down in both training data and number of parameters. The parameter thing is a bit tricky to fully grasp, but roughly speaking, more parameters usually means more power. Take GPT-4, which is believed to have a trillion or more parameters, although OpenAI keeps a tight lid on the specifics.
Now, Meta's LLaMA tops out at a comparatively measly 65 billion parameters, and some versions have just seven billion. But here's the crazy part: even though LLaMA has never officially beaten GPT-4 in any benchmark, it still holds its own. In many situations, it's more than good enough.
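To get a feel for where those parameter counts come from, here's a minimal back-of-the-envelope sketch. It uses a standard rule of thumb for decoder-only transformers (roughly 12 × d² weights per layer, ignoring embeddings, biases, and norms); the layer counts and widths below are published configs, but the formula is only an approximation, not how any lab actually reports its numbers.

```python
# Rough parameter-count estimate for a decoder-only transformer.
# Rule of thumb: each layer holds ~4*d^2 attention weights plus
# ~8*d^2 feed-forward weights, so ~12*d^2 per layer. This ignores
# embeddings, biases, and layer norms, so it slightly undercounts.

def estimate_params(n_layers: int, d_model: int) -> int:
    return n_layers * 12 * d_model ** 2

# GPT-3-class model: 96 layers, width 12288 (published figures)
gpt3_ish = estimate_params(96, 12288)

# LLaMA-7B: 32 layers, width 4096 (published figures)
llama_7b = estimate_params(32, 4096)

print(f"GPT-3-class: {gpt3_ish / 1e9:.0f}B parameters")
print(f"LLaMA-7B:    {llama_7b / 1e9:.1f}B parameters")
```

The estimates land close to the advertised sizes (~174B and ~6.4B), which is why "parameter count" works as a shorthand for model scale even across very different architectures.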
And get this: LLaMA is open source-ish – open, Meta-style. So a whole army of researchers can take the tools, the techniques, and the training and improve them, fast. In just a few weeks, we saw Alpaca, Vicuna, and a bunch of other large language models fine-tuned to outperform the original LLaMA, inching closer and closer to GPT-4 in benchmarks.
Then, in July, Meta AI released Llama 2 under a far less restrictive license, and thousands of AI coders got to work fine-tuning it for all sorts of uses.
But here's the kicker. Just three weeks ago, Meta AI nailed it again with Code Llama, a model specifically designed to provide code completions and analysis within an IDE. And within two days, a startup called Phind had fine-tuned Code Llama into a model that actually beats GPT-4 – at least on one benchmark.
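The IDE trick these code models rely on is "fill in the middle": the editor sends the text before and after the cursor, and the model generates what goes between them. Here's a minimal sketch of the editor side; `build_infill_request` and its field names are hypothetical illustrations, not a real Code Llama API, and the actual prompt token format is defined by the model's own documentation.

```python
# Sketch of the editor side of fill-in-the-middle code completion.
# The editor splits the open buffer at the cursor; a code model is
# then asked to generate the text that belongs between the two parts.
# build_infill_request is a hypothetical helper, not a real API.

def build_infill_request(buffer: str, cursor: int) -> dict:
    """Split the open file at the cursor into prefix and suffix."""
    return {"prefix": buffer[:cursor], "suffix": buffer[cursor:]}

source = "def add(a, b):\n    return \n"
cursor = source.index("return ") + len("return ")
req = build_infill_request(source, cursor)
# The model would be asked to fill the gap between prefix and suffix;
# a plausible completion here is "a + b".
```

The point is that the model sees context on both sides of the cursor, which is what makes mid-line and mid-function completions possible rather than only append-at-the-end generation.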
That's a major game-changer – a warning shot fired at OpenAI, Microsoft, and Google. These "tiny" large language models can