Dynatrace Expands Platform to Include End-to-End Observability for Large Language Models and Generative AI-Powered Apps
Dynatrace, the leader in AI-powered observability, has announced the expansion of its platform to include end-to-end observability for Large Language Models (LLMs) and Generative AI-powered applications. The expansion aims to give organizations greater visibility and control as they adopt these technologies, helping them ensure security and compliance, prevent AI “hallucinations,” and accurately forecast and manage costs.
AI Observability Provides Holistic Observability and Security for LLMs and Generative AI-Powered Applications
The enhancement to Dynatrace’s analytics and automation platform, known as AI Observability, is designed to provide holistic observability and security for LLMs and generative AI-powered applications. This development empowers organizations worldwide to integrate generative AI into their roadmaps confidently and cost-effectively, fostering innovation, productivity, and revenue growth.
Dynatrace AI Observability Covers the Full AI Stack
Dynatrace AI Observability covers the full AI stack, including infrastructure, foundational models, semantic caches, vector databases, and orchestration frameworks. It also extends support to major platforms used for building, training, and delivering AI models, such as Microsoft Azure OpenAI Service, Amazon SageMaker, and Google AI Platform.
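Dynatrace provides its own instrumentation, but the platform also ingests OpenTelemetry data, so the kind of signal involved can be illustrated with a minimal, hedged sketch. The example below assumes OpenTelemetry instrumentation shipped over OTLP; the span attribute names and the fake_llm_call helper are illustrative placeholders, not an official Dynatrace schema or API.

```python
# Minimal sketch: wrap an LLM call in an OpenTelemetry span so model and token
# metadata can be exported to an observability backend that accepts OTLP.
# Attribute names and fake_llm_call are illustrative, not an official schema.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Configure a tracer that exports spans via OTLP; the endpoint and auth headers
# (e.g. an ingest token) would normally come from environment variables.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("genai-demo")

def fake_llm_call(prompt: str) -> dict:
    """Stand-in for a real model call; returns text plus token usage."""
    return {"text": "…", "prompt_tokens": 42, "completion_tokens": 128}

def answer(prompt: str) -> str:
    # One span per model invocation, tagged with the details an observability
    # platform needs to analyze latency, usage, and cost per request.
    with tracer.start_as_current_span("llm.completion") as span:
        result = fake_llm_call(prompt)
        span.set_attribute("llm.model", "gpt-4")
        span.set_attribute("llm.prompt_tokens", result["prompt_tokens"])
        span.set_attribute("llm.completion_tokens", result["completion_tokens"])
        return result["text"]

if __name__ == "__main__":
    print(answer("Summarize our observability requirements."))
```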
Leveraging Key Technologies, Dynatrace Provides a Precise and Complete View of AI-Powered Applications
By leveraging technologies such as the platform’s Davis AI, Dynatrace AI Observability delivers a precise and complete view of AI-powered applications. This helps organizations identify performance bottlenecks and their root causes, improve user experiences, and comply with privacy, security, and governance requirements. It also helps them forecast and control the costs driven by the tokens that generative AI models consume.
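To make the token-cost point concrete, the sketch below shows the kind of arithmetic such forecasting rests on: multiply observed prompt and completion token counts by per-1,000-token prices and extrapolate to expected request volume. The prices and volumes are placeholders chosen for illustration, not actual vendor rates or Dynatrace functionality.

```python
# Illustrative token-cost estimate: given observed per-request token counts and
# a projected request volume, extrapolate a monthly spend.
# The prices below are placeholders, not actual vendor rates.
PRICE_PER_1K_PROMPT = 0.01       # USD per 1,000 prompt tokens (assumed)
PRICE_PER_1K_COMPLETION = 0.03   # USD per 1,000 completion tokens (assumed)

def cost_per_request(prompt_tokens: int, completion_tokens: int) -> float:
    """Cost of a single model call based on its token usage."""
    return (prompt_tokens / 1000) * PRICE_PER_1K_PROMPT + \
           (completion_tokens / 1000) * PRICE_PER_1K_COMPLETION

def monthly_forecast(avg_prompt: int, avg_completion: int, requests_per_day: int) -> float:
    """Extrapolate a 30-day spend from average per-request token usage."""
    return cost_per_request(avg_prompt, avg_completion) * requests_per_day * 30

# Example: 800 prompt tokens and 300 completion tokens per request at
# 50,000 requests per day comes to roughly $25,500 per month at the assumed rates.
if __name__ == "__main__":
    print(f"${monthly_forecast(800, 300, 50_000):,.2f}")
```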
Growing Interest in Generative AI Signals the Need for Reliable and Cost-Effective Solutions
Gartner predicts that more than 50% of cloud computing resources will be allocated to AI workloads by 2028, reflecting growing interest among organizations in using generative AI to drive automation and innovation. However, concerns about the significant costs of generative AI-powered services and the need for responsible, ethical AI use have also been highlighted.
Dynatrace Extends Its Observability and AI Leadership to Help Organizations Confidently Embrace AI
Bernd Greifeneder, CTO at Dynatrace, emphasizes the importance of AI observability in overcoming the challenges posed by generative AI. He states that Dynatrace is extending its observability and AI leadership to meet this need, helping customers embrace AI confidently and securely while gaining unparalleled insights into their generative AI-driven applications.
In partnership with Microsoft, Dynatrace’s AI Observability has been integrated with Azure OpenAI Service, empowering organizations to create generative AI-based services with confidence and reliability.
Analyst comment
Positive news: Dynatrace expands its platform to include end-to-end observability for Large Language Models (LLMs) and Generative AI-powered apps. This gives organizations greater visibility and control, helping them ensure security and compliance, prevent AI “hallucinations,” and manage costs. As interest in generative AI grows, Dynatrace offers organizations a reliable, cost-effective way to embrace AI, and the integration with Azure OpenAI Service lets customers build generative AI-based services with confidence. The market for observability of AI-powered apps is expected to grow as more organizations adopt these technologies.