AI News Timeline
Alibaba


Alibaba Shrinks Its Coding AI to Run Locally

Qwen3-Coder-Flash is a compact 30B-parameter MoE model capable of local inference on modern MacBooks. It joins Alibaba’s broader Qwen3 ecosystem as a nimble counterpart to the heavyweight 480B hosted version, giving developers a pragmatic hybrid setup for coding workflows.
July 31, 2025

Alibaba Drops Qwen3‑30B Instruct Models With Local Deployment in Mind

Alibaba has released two new open-source MoE models under its Qwen3 series, including an FP8 quantized variant. The 30B parameter architecture activates only 3B parameters per forward pass, enabling high performance with reduced hardware demand. With 128K context support and strong benchmarks in reasoning and code, these models are primed for local use cases on Apple Silicon Macs and beyond.
July 29, 2025
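The active-versus-total parameter split above is what reduces the hardware demand: all 30B parameters must fit in memory, but only ~3B participate in each forward pass. A rough back-of-envelope sketch of the weight footprint (actual memory use also depends on the quantization scheme, KV cache, and runtime overhead):

```python
def weight_memory_gib(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB for a given parameter count."""
    return params_billion * 1e9 * bytes_per_param / 2**30

# Full 30B MoE resident in memory: FP8 (1 byte/param) vs BF16 (2 bytes/param)
print(round(weight_memory_gib(30, 1), 1))  # ~27.9 GiB at FP8
print(round(weight_memory_gib(30, 2), 1))  # ~55.9 GiB at BF16

# Only ~3B parameters are routed per token, so per-token compute touches far less:
print(round(weight_memory_gib(3, 2), 1))   # ~5.6 GiB of weights at BF16
```

At FP8, the full model lands under the 32 GB of unified memory on many Apple Silicon Macs, which is why the release is pitched at local deployment.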

Wan 2.2 Debuts as Fully Open-Source, Multimodal Video Model

Text-to-video, image-to-video, and hybrid input modes now run locally at 720p and 24 fps. The Alibaba Wan team has open-sourced Wan 2.2 under Apache 2.0 and integrated it into ComfyUI and Hugging Face on launch day.
July 28, 2025

Alibaba’s Qwen Team Unleashes Qwen-MT, Teases Wan 2.2 in Breakneck Model Rollout

Qwen‑MT brings instruction-tuned translation across 92 languages, built on Qwen3. With daily model drops and Wan 2.2 on the horizon, Qwen is pushing a relentless pace in open AI development.
July 24, 2025

Alibaba Releases Qwen3‑Coder, a 480B-Parameter Open-Source Model for Autonomous Coding

The Qwen team at Alibaba has launched Qwen3‑Coder, a massive open-source AI model designed for complex coding workflows. With 480 billion parameters and support for a 1 million token context window, the model challenges proprietary leaders like Claude Sonnet 4 while remaining freely available under Apache 2.0.
July 22, 2025

Alibaba’s Qwen3‑235B Instruct Model Gets Major Upgrade With Longer Context and Sharper Reasoning

Alibaba Cloud has released a refined version of its flagship Qwen3‑235B model, specifically tuned for instruction-following tasks. The update splits the non-thinking mode into its own dedicated variant, expands the context length to as much as 1 million tokens, and delivers noticeable gains in reasoning, coding, and alignment benchmarks. The move simplifies deployment decisions for developers while solidifying Qwen’s standing in the open-source large language model arena.
July 21, 2025
Built with ♥️ in Berlin, New York, and Vienna.
© 2025 Neo Digital Magazines LLC. All rights reserved.