What's up, people! I've got some exciting news for you today. Krishna Hegde, former VP of Partner Revenue at VMware, has joined forces with Got It AI to drive some serious strategic partnerships. This is gonna be big, my friends!
So let me break it down for you. Got It AI, the leading innovator in Generative AI, has some major plans. They’re integrating NVIDIA AI Solutions into their incredible ELMAR platform. Now, if you haven’t heard of ELMAR, let me tell you, it’s a game-changer. This on-premise enterprise AI platform is packed with unique features like PII masking, prompt-injection defense, and LLM hallucination reduction. But wait, there’s more! The integration with NVIDIA AI Solutions is gonna take ELMAR’s manageability and performance to a whole new level. They’re using Triton, NeMo Guardrails, and TensorRT-LLM. These guys mean business!
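Now, Got It AI hasn't published how ELMAR's PII masking works under the hood, but to give you a feel for what the feature means in practice, here's a minimal sketch of the idea: scrub identifiers out of text before it ever reaches an LLM. The patterns and placeholder tokens below are purely illustrative, not ELMAR's actual pipeline.

```python
import re

# Illustrative PII masking: swap emails and US-style phone numbers for
# placeholder tokens before the text is sent to an LLM.
# (ELMAR's real implementation is proprietary; this just shows the concept.)
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace each matched PII span with a bracketed type label."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Reach me at jane.doe@example.com or 555-123-4567"))
```

A real enterprise deployment would go well beyond two regexes (names, addresses, account numbers, NER-based detection), but the masking contract — PII out, tokens in — is the same idea.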
And guess what? Krishna Hegde is leading the charge on platform partnerships at Got It AI. This move is not only gonna benefit them, it's also gonna give NVIDIA AI Enterprise partners like VMware, HPE, and Dell the opportunity to make the most of ELMAR. You see, Got It AI's platform offers both cloud-based and on-premise LLMs, making it a strong fit for enterprise customers who need that on-premise option, human-level accuracy, and top-notch performance for their Generative AI use cases, like enterprise chatbots. And let me tell you, their Agent Copilot and CX AutoPilot are advanced chatbot solutions that use enterprise knowledge bases to deliver reliable, integrated conversational access to all sorts of enterprise data. Sales assist and customer service just got a whole lot easier!
Now, let's get into the nitty-gritty. ELMAR's hallucination guardrails are gonna blow your mind. They cut hallucinations down to human-level performance. Don't believe me? Check out these numbers:
| Model | Baseline hallucination rate | With Got It AI guardrails |
| --- | --- | --- |
| GPT-4 | 2.3% | 1.1% |
| GPT-3.5 | 10.4% | 1.5% |
Impressive, right? But that's not all. When it comes to open-source LLMs, ELMAR has got you covered. They offer a choice of models, like Llama 2 and MPT, as well as a native 3-billion-parameter LLM that delivers the lowest hallucination rates among on-prem model options while being substantially smaller. Take a look at these results:
| Model | Model size | Baseline hallucination rate | With Got It AI guardrails |
| --- | --- | --- | --- |
| Llama 2 | 13 billion | 36.8% | 21.8% |
| MPT | 30 billion | 30.0% | 15.5% |
| Llama 2 | 70 billion | 25.4% | 12.7% |
| Got It AI ELMAR | 3 billion | 17.7% | 7.5% |
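And if you like to see the math, here's a quick sketch that turns those quoted rates into relative reductions. Every number comes straight from the figures above; nothing else is assumed.

```python
def relative_reduction(baseline: float, guarded: float) -> float:
    """Fractional drop in hallucination rate after applying guardrails."""
    return (baseline - guarded) / baseline

# Figures quoted above: (baseline %, with-guardrails %)
rates = {
    "GPT-4": (2.3, 1.1),
    "GPT-3.5": (10.4, 1.5),
    "Llama 2 13B": (36.8, 21.8),
    "MPT 30B": (30.0, 15.5),
    "Llama 2 70B": (25.4, 12.7),
    "Got It AI ELMAR 3B": (17.7, 7.5),
}

for model, (baseline, guarded) in rates.items():
    print(f"{model}: {baseline}% -> {guarded}% "
          f"({relative_reduction(baseline, guarded):.0%} relative reduction)")
```

Run it and you'll see the guardrails roughly halve the hallucination rate for most models, with GPT-3.5 getting the biggest relative cut.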
Now, the integration of NVIDIA AI Solutions into ELMAR is gonna take things up a notch. They're gonna enhance model loading and management capabilities using the mighty Triton Inference Server. And don't even get me started on inference performance. With TensorRT-LLM, it's gonna be on another level. And if you need fine-tuning, which is often necessary for on-premise models, you can count on NVIDIA DGX Cloud. It's a complete package, my friends!
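For a little context on what "model loading and management" via Triton actually looks like: each model served by Triton lives in a model repository with a `config.pbtxt` describing its backend and tensors. Here's a hedged sketch of what such an entry could look like for a TensorRT-LLM model — the model name and tensor shapes below are made up for illustration and are not from Got It AI.

```
# config.pbtxt -- hypothetical entry in a Triton model repository.
# Model name and tensor dims are invented for illustration only.
name: "elmar_llm"          # hypothetical model name
backend: "tensorrtllm"     # Triton's TensorRT-LLM backend
max_batch_size: 8
input [
  {
    name: "input_ids"
    data_type: TYPE_INT32
    dims: [ -1 ]           # variable-length token sequence
  }
]
output [
  {
    name: "output_ids"
    data_type: TYPE_INT32
    dims: [ -1 ]
  }
]
```

With configs like this in place, Triton handles loading, versioning, and concurrent serving of the models, which is the manageability piece the integration is after.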
And the cherry on top of all this awesomeness is Krishna Hegde joining the team to spearhead strategic partnerships. This guy knows his stuff! His experience from VMware is gonna take ELMAR to new heights in the enterprise LLM landscape. He's loving how Got It AI is integrating their optimized platform with NVIDIA's on-premise architecture. It's gonna make deployments for enterprise use cases a breeze. And let's not forget about the flexibility ELMAR offers. On-premise data center customers can choose from a suite of open-source LLMs or go with a hybrid approach using OpenAI and Google cloud models. Plus, they can even benchmark the model of their choice against their specific enterprise data. Talk about tailored solutions!
And let me drop some more knowledge on you. Peter Relan, the Chairman of Got It AI, has complete confidence in Hegde's abilities. These two have worked together before, making big moves at companies like OpenFeint and Discord. They've partnered up with industry giants and know how to get things done. The mission is clear: high-performance, enterprise-class Generative AI with guardrails, deployable with human-level accuracy in on-premise data centers.
So, what’s next? Got It AI is ready to forge partnerships with infrastructure and cloud providers. They want ELMAR to be the go-to PaaS layer for both cloud-based and on-premise enterprise LLM use cases. And trust me, with the integration of NVIDIA AI Solutions and Hegde on board, this company is gonna make some serious waves in the AI industry.
Alright, folks. That’s all the juicy info for today. Stay tuned for more exciting updates from Got It AI. And remember, this is just the beginning. They’re pushing the boundaries of Generative AI and delivering high-performance solutions for enterprises. This is gonna be one heck of a journey!
And before I go, I gotta give credit where credit is due. This news came straight from Got It AI. So shoutout to them for keeping us in the know!
This is your boy, signing off. Catch you on the next one!
Source: Got It AI