TensorOpera has announced a technology collaboration with Qualcomm Technologies to deliver solutions that enable artificial intelligence (AI) developers to build, deploy and scale generative AI applications. The collaboration pairs the company's TensorOpera AI Platform with Qualcomm Cloud AI 100 inference solutions from Qualcomm Technologies, giving developers access to Qualcomm Technologies' advanced AI technologies directly through the TensorOpera AI Platform.
The growth of open-source foundation models, along with the availability of faster and more affordable AI hardware, has encouraged many enterprises, from startups to large companies, to develop their own generative AI applications, gaining privacy, control and ownership. However, many encounter challenges with complex generative AI software stacks, infrastructure management and the high computational costs of scaling and bringing their applications to production.
To help address these challenges, developers can look to the TensorOpera AI Platform, a comprehensive stack designed to simplify the complexities of generative AI development. With the Cloud AI 100's ability to support distributed intelligence from the cloud to the client edge, together with its industry-leading energy efficiency, portability and flexibility, the TensorOpera AI Platform will be able to deliver exceptional performance-per-dollar and cost efficiency, making it an attractive choice for developers and enterprises.
AI developers can now access Cloud AI 100 instances on the TensorOpera AI Platform, designed to enable the use of popular generative AI models, including Llama3 by Meta and Stable Diffusion by Stability AI. They can choose from various usage models, including API access, on-demand (pay-as-you-go) and dedicated deployments, while taking advantage of capabilities such as autoscaling, comprehensive endpoint monitoring, optimised job scheduling and AI agent creation.
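As an illustration of the API-access usage model described above, the sketch below assembles a request body for a hosted model endpoint. This is a minimal sketch only: the endpoint URL, model identifier and request schema here are assumptions (modelled on the OpenAI-compatible format many inference platforms accept), not TensorOpera's documented API.

```python
import json

# Hypothetical endpoint for illustration only; the actual TensorOpera
# API URL, authentication scheme and request schema may differ.
ENDPOINT = "https://api.tensoropera.ai/v1/chat/completions"  # assumed

def build_request(prompt: str, model: str = "meta/llama3-8b") -> dict:
    """Assemble a chat-completion request body in the common
    OpenAI-compatible format (model name is an assumed placeholder)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

payload = build_request("Summarise edge-cloud AI in one sentence.")
print(json.dumps(payload, indent=2))
```

In practice the payload would be POSTed to the platform's endpoint with an API key; consult the platform's own documentation for the real URL, model names and parameters.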
Salman Avestimehr, the co-founder and CEO of TensorOpera, said, “We are thrilled to work with Qualcomm Technologies. It expands the compute options for AI developers on our AI Platform. Our work together also aligns with our shared long-term vision of integrated edge-cloud platforms, which we believe will drive widespread adoption of generative AI. In line with this vision, TensorOpera will soon launch its new foundation model optimised for smartphones and edge devices. Integrated into the TensorOpera AI Platform, this model enables the development of powerful AI agents directly on mobile devices – a field where Qualcomm has significantly invested by delivering high-performance, efficient compute chips for smartphones.”
Rashid Attar, vice president of cloud computing at Qualcomm Technologies, said, “With the explosion of new generative AI models, developers around the world are hungry for easy, effective access to high-performance AI inference for deployment. By combining TensorOpera’s AI Platform with Qualcomm Technologies’ Cloud AI 100, developers now have immediate access to deploy the most popular GenAI/Large Language Models – Llama3, Mistral, SDXL – at the push of a button. We are excited to collaborate with TensorOpera to deliver a high-performance inference platform that offers exceptional value and convenience to developers.”
This technology collaboration represents a significant step towards accelerating generative AI deployment and creating new opportunities for innovation. The combined strengths of TensorOpera’s AI Platform and Qualcomm Technologies’ Cloud AI 100 inference solutions will drive progress in AI applications across multiple industries. AI developers and enterprises are invited to explore the services offered by TensorOpera and Qualcomm Technologies, and can get started by visiting https://TensorOpera.ai/qualcomm-cloud-ai-100 to apply for early access to Qualcomm-TensorOpera dedicated or serverless model endpoints.