Introducing AI Playground — LLM Battleground to Test Powerful AI Models

This blog post focuses on new features and improvements. For a comprehensive list, including bug fixes, please see the release notes.

AI Playground

We’re excited to introduce the new AI Playground, your interactive LLM battleground to explore, test, and build with powerful AI models. Whether you’re a developer evaluating model performance, a product manager prototyping features, or just curious about what a model can do, the Playground gives you immediate hands-on access without any setup.

The Playground brings together a curated collection of trending models across vision, language, and multimodal tasks. Some of the models you can test include DeepSeek-R1-Distill-Qwen-7B, Llama 3.2, Qwen 2.5-VL, Gemma 3.0, and MiniCPM. You can input your own data, get real-time outputs, compare results, and understand how models behave — all from one clean, intuitive interface. We now support streaming, so you can interact with language models as they generate outputs token by token, just like in a chat.
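Token-by-token streaming can be pictured as a generator that yields pieces of the reply as they are produced, rather than one blocking call that returns the whole completion. The sketch below is purely illustrative (the model reply is faked, and the function names are hypothetical, not part of the Clarifai API); it only shows why streaming lets a UI render output incrementally.

```python
def stream_tokens(prompt):
    """Hypothetical stand-in for a streaming LLM endpoint:
    yields output tokens one at a time instead of returning
    the full completion at once."""
    # A real model would generate these; here we fake a short reply.
    reply = ["Streaming", " lets", " you", " render", " output", " incrementally."]
    for token in reply:
        yield token

def consume(prompt):
    """Collect tokens as they arrive; a chat UI would display
    each one immediately instead of buffering."""
    pieces = []
    for token in stream_tokens(prompt):
        pieces.append(token)
    return "".join(pieces)

print(consume("What is streaming?"))
```

The consumer sees partial output as soon as the first token arrives, which is what makes the Playground feel like a live chat.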

To make the experience even more developer-friendly, every model in the Playground includes a ready-to-use API code snippet. With a single click, you can copy the code and start integrating that model directly into your own AI apps. No need to dig through documentation or figure out the right configuration. It’s available instantly.
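Under the hood, a copied snippet boils down to sending the model a small JSON predict request. The helper below is a rough sketch of that general shape; the field names and model ID are illustrative assumptions, not the exact Clarifai schema, and a real snippet from the Playground also carries the correct endpoint, headers, and auth token.

```python
import json

def build_predict_request(model_id, text):
    """Assemble a minimal, illustrative JSON predict payload.

    NOTE: field names here are an assumption for illustration,
    not the authoritative Clarifai API schema.
    """
    return {
        "model_id": model_id,
        "inputs": [{"data": {"text": {"raw": text}}}],
    }

payload = build_predict_request("llama-3_2", "Hello, world")
print(json.dumps(payload, indent=2))
```

The point of the one-click snippet is that you never have to reconstruct this payload by hand: the Playground emits it pre-filled for the model you are viewing.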

You can also deploy community models directly from the Playground to your own dedicated compute. This makes it easy to move from experimentation to production without leaving the interface.

We built the AI Playground to shorten the path from testing a model to deploying it. You no longer need to switch tools, write extra code, or set up infrastructure just to see how a model works.

Labeling Tasks

  • We’ve upgraded the Labeling Tasks UI to provide a smoother and more consistent experience. This update brings the task labeling interface in line with our unified Input Viewer component, ensuring consistency across the platform and making it easier for users to navigate and label with confidence.

Platform Updates

Redesigned Homepage

  • We introduced a redesigned homepage to help you get started with the Clarifai platform more quickly.

  • You can now easily discover trending AI models, test them in the playground, set up your computing infrastructure, browse your apps, and more — all in one centralized place.

  • You can also customize your homepage layout using the “Configure Home” button in the upper-right corner.

Platform Improvements

  • Introduced a new Infrastructure Manager role, allowing users to create, modify, and delete clusters and nodepools within an organization.
  • Removed the automatic app onboarding flow for new users upon signup. Apps are no longer required for certain actions, such as making deployments with Compute Orchestration.
  • We now automatically fetch a user’s full name after signing up with SSO via Google or GitHub, eliminating the need to manually enter it during the signup flow.
  • We improved the “Member Since” column in the organization members list table. It now displays when a member joined the organization, rather than when they assumed their current role.
  • We restricted access to the organization members list to admins only.

We’ve made a lot of updates to the Control Center to make it more intuitive and easier to navigate.

  • Refined table layouts for improved clarity and readability.
  • Introduced a new multi-column tooltip for charts, enhancing data visibility and readability.
  • Fixed a color inconsistency where the tooltip card displayed an incorrect color when hovering over chart elements. The tooltip now accurately reflects the corresponding chart segment color.
  • Fixed an issue where the usage dashboard failed to correctly report costs for deleted models.

Additional changes – Python SDK

We’ve made several enhancements to the Python SDK to improve usability and the developer experience:

  • Removed HF loader config.json validation for all Clarifai model type IDs.
  • Added regex patterns to filter checkpoint files for download.
  • Implemented validation for CLI configuration.
  • Fixed the Docker image name and introduced the skip_dockerfile option in the test-locally subcommand of the Model CLI.
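Filtering checkpoint files for download with regex patterns, as the second bullet describes, can be sketched as an allow-list match over filenames. The patterns and filenames below are illustrative assumptions, not the SDK's actual defaults.

```python
import re

# Illustrative allow-list: keep weight shards, model config, and
# tokenizer files; skip large artifacts like optimizer states.
ALLOWED_PATTERNS = [
    r".*\.safetensors$",
    r"config\.json$",
    r"tokenizer.*",
]

def filter_checkpoint_files(filenames, patterns=ALLOWED_PATTERNS):
    """Keep only filenames matching at least one regex pattern."""
    compiled = [re.compile(p) for p in patterns]
    return [f for f in filenames if any(rx.match(f) for rx in compiled)]

files = [
    "model-00001-of-00002.safetensors",
    "optimizer.pt",
    "config.json",
    "tokenizer.json",
]
print(filter_checkpoint_files(files))
```

Matching against an allow-list like this avoids pulling down gigabytes of files that inference never touches.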

Learn more about the latest updates in the release notes.

Ready to start building?

Jump into the Playground to test the trending AI models and explore their capabilities in real time. When you’re ready to scale, set up your own dedicated compute and deploy models directly to your preferred environment. If you have any questions, send us a message on our Community Discord channel. Thanks for reading!

Check out the tutorial below for a glimpse of the new homepage and to see how you can explore models, create compute clusters, and deploy your own models.

