What Is MAAS Betting On? An Overlooked AI Profit Logic

CHcapital

In today’s increasingly heated U.S.-China AI competition, headlines bombard us daily with reports on top tech companies and their massive models. However, if you’re an investor who truly cares about commercial monetization, it’s time to shift your focus away from the spotlight on the "race of big models."

In the deep end of the commercial world, most ordinary enterprises or institutions don’t need an all-knowing, infinite-power "Einstein-level" AI that consumes vast computational power. What they need is a "golden assistant"—an AI that doesn’t leak data, is cost-effective, and can help improve daily work efficiency by doing the grunt work.

This is the massive discrepancy in expectations within current AI, and it’s exactly the blue ocean market that MAAS, a company I’ve recently been watching, is quietly capitalizing on. Let’s break down the economics of large models in simple investment terms.



Understanding Which Kind of AI Is Most Profitable

To understand which AI is the most profitable, we need to first grasp the concept of "parameter scale." You can roughly classify large models into a few tiers:

● Top players (>100B/trillions of parameters): Models like GPT-4 are incredibly powerful, but their inference (day-to-day usage) costs are astronomical. Running them requires massive A100/H100 compute clusters—essentially "money-burning machines."

● Lightweight and practical models (7B-13B parameters): The "B" here stands for Billion. A 7B model means a model with 7 billion parameters.

Why is the 7B model considered the "king of cost-effectiveness"?

The answer is simple: it has a very low hardware threshold and a "just right" level of ability. To deploy a trillion-parameter model, companies might need to spend hundreds of thousands or even millions on servers. But a 7B model, after compression and quantization, can run smoothly on a single A100 GPU—or even on a consumer-grade PC with an RTX 4090.
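A quick back-of-the-envelope calculation shows why the hardware threshold drops so sharply with quantization. This is a minimal sketch using only weight storage (real deployments also need memory for the KV cache, activations, and runtime overhead, so actual requirements are somewhat higher):

```python
# Rough VRAM estimate for holding a model's weights, illustrative only.
# Real serving needs extra memory for the KV cache and activations.

def weight_memory_gb(params_billions: float, bits_per_param: float) -> float:
    """Gigabytes needed just to store the weights."""
    bytes_total = params_billions * 1e9 * (bits_per_param / 8)
    return bytes_total / 1e9

fp16 = weight_memory_gb(7, 16)  # half-precision serving
int4 = weight_memory_gb(7, 4)   # 4-bit quantized serving

print(f"7B @ fp16:  {fp16:.1f} GB")  # 14.0 GB -> data-center-class GPU territory
print(f"7B @ 4-bit: {int4:.1f} GB")  # 3.5 GB  -> fits comfortably in a 24 GB RTX 4090
```

The same arithmetic applied to a trillion-parameter model gives roughly 2,000 GB at fp16, which is why the top tier requires multi-GPU clusters while a quantized 7B fits on one card.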

For businesses, what truly matters isn’t how smart the model is, but the Cost per Task (the cost of completing a task) and stability. The reason the 7B model has commercial value isn’t because it’s "strong enough" but because it’s "good enough" and can scale to a cost-effective deployment range.
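The Cost per Task logic can be made concrete with a toy comparison. All figures below are assumptions invented for the sketch (GPU rental prices and throughput vary widely in practice); the point is only that a cheap-enough model that completes the task wins on unit economics:

```python
# Illustrative cost-per-task comparison. The hourly prices and throughput
# numbers are assumptions for the sketch, not measured figures.

def cost_per_task(gpu_hourly_usd: float, tasks_per_hour: float) -> float:
    """Unit cost of completing one task on rented compute."""
    return gpu_hourly_usd / tasks_per_hour

# Assumed: an 8x H100 node serving a frontier model vs. one mid-range GPU
# serving a fine-tuned 7B model on the same narrow task.
big = cost_per_task(gpu_hourly_usd=32.0, tasks_per_hour=2_000)
small = cost_per_task(gpu_hourly_usd=1.5, tasks_per_hour=1_500)

print(f"frontier model: ${big:.4f}/task")
print(f"tuned 7B model: ${small:.4f}/task")
print(f"cost ratio:     {big / small:.0f}x")
```

If both models clear the quality bar for the task, the cheaper one is the commercially viable one—which is the entire argument for the 7B tier.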

More importantly, there’s a consensus in the industry: "fine-tuning beats parameters." The fundamental reason is that large models' general capabilities come from pretraining, while what enterprises need is highly structured, clearly defined, domain-specific knowledge. In these scenarios, high-quality data and fine-tuning are often more effective than blindly increasing parameter counts.

Don’t underestimate the 7B model. As long as it’s fed high-quality vertical industry data (e.g., government documents, financial reports, medical guidelines) and finely tuned, it can perform just as well or even outperform a giant model that hasn’t been properly tuned. This is the perfect balance of "good enough + low cost."
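Part of why fine-tuning a 7B model is so cheap is that parameter-efficient methods such as LoRA train only small low-rank adapter matrices, leaving the base weights frozen. The sketch below uses illustrative assumptions (hidden size 4096, 32 layers, rank 8, two adapted projection matrices per layer), not any specific model's actual configuration:

```python
# How few parameters a LoRA fine-tune actually trains, under assumed
# dimensions (hidden=4096, layers=32, rank=8, 2 adapted matrices/layer).

def lora_trainable_params(hidden: int, layers: int, rank: int,
                          matrices_per_layer: int) -> int:
    # Each adapted square matrix gains two low-rank factors:
    # one of shape (hidden x rank) and one of shape (rank x hidden).
    per_matrix = 2 * hidden * rank
    return layers * matrices_per_layer * per_matrix

trainable = lora_trainable_params(hidden=4096, layers=32, rank=8,
                                  matrices_per_layer=2)
total = 7_000_000_000  # the 7B base model, frozen during tuning

print(f"trainable: {trainable:,} params ({trainable / total:.3%} of 7B)")
```

Training well under 0.1% of the weights is what lets a vertical-data fine-tune run on modest hardware, reinforcing the "good enough + low cost" economics above.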

The 'Lingyan Miaoyu' large model developed by Huazhi Future, a subsidiary of MAAS, precisely targets this 7B sweet spot. It doesn’t aim for the illusory "omniscient" AI, but instead focuses on achieving the highest return on investment in specific scenarios such as government affairs, urban management, and security.



Data Security: The Biggest Obstacle​

In addition to cost, what is the biggest stumbling block for the widespread adoption of large models? It’s data security.

Two years ago, the departure of Ilya Sutskever, co-founder and former chief scientist of OpenAI, sent shockwaves through the tech world. He went on to create a new company, SSI (Safe Superintelligence), with a core belief: before pursuing more powerful AI, its absolute security must be guaranteed.

Today in China, AI development is embraced by all, but for large state-owned enterprises and local governments that control critical national resources, their biggest concern before using AI is data security.

For public security systems, public hospitals, and major state-owned enterprises, data sovereignty is a non-negotiable red line.

These entities would never dare upload sensitive data like citizens' privacy, city surveillance, or financial flows to public cloud-based large model APIs. Their core demand is very clear: the model must be safe and controllable, and it must support fully localized "private deployment"—that is, it must work even offline, and data must never leave the premises.

This immense "security + intelligence" demand from government and enterprise customers has given rise to a batch of AI application companies that specifically serve the G-side (government) and B-side (large enterprises), focusing on strong data security and private delivery. Through its acquisition of Huazhi Future, MAAS has become a key player in this market.



"Lingyan Miaoyu" and Its Competitive Edge​

Huazhi Future’s fully self-developed "Lingyan Miaoyu" large model not only enables low-cost local private deployment for clients but, more crucially, carries high-level official compliance credentials. It was officially approved by the Cyberspace Administration of China (CAC) in November 2025 and is the first large model approved in Chongqing’s Yuzhong District.

For the B2G (government) market, these security compliance credentials are a thousand times more important than ranking on performance leaderboards.



Currently, Huazhi Future’s AI system is helping local public security departments in certain cities monitor video footage 24/7, accurately identifying and flagging various violations. Whether it's illegal parking, improper bicycle parking, illegal outdoor advertisements, drying clothes on the street, or overflowing trash cans, the system can instantly recognize violations, issue alerts, and send work orders to nearby law enforcement.

This system no longer relies on traditional, human-monitored video surveillance, but on a "vision + language model" multi-modal intelligent agent with logical reasoning and event-classification capabilities.
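The detect → classify → dispatch flow described above can be sketched in a few lines. Every name here is hypothetical, invented purely for illustration—the real "Lingyan Miaoyu" pipeline is not public:

```python
# Hypothetical sketch of a detect -> classify -> dispatch loop for urban
# management. All labels, categories, and thresholds are assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    camera_id: str
    label: str         # label emitted by the vision model
    confidence: float  # model confidence in [0, 1]

# Assumed mapping from vision labels to violation categories.
VIOLATIONS = {
    "illegal_parking": "traffic",
    "street_drying": "city_appearance",
    "overflowing_trash": "sanitation",
}

def to_work_order(det: Detection, threshold: float = 0.8) -> Optional[dict]:
    """Turn a high-confidence detection into a dispatchable work order."""
    category = VIOLATIONS.get(det.label)
    if category is None or det.confidence < threshold:
        return None  # unknown event type, or too uncertain to dispatch
    return {"camera": det.camera_id, "category": category, "label": det.label}

order = to_work_order(Detection("cam-042", "illegal_parking", 0.93))
print(order)
```

In a production system the classification step would be the multi-modal model itself rather than a static lookup table; the sketch only shows where the structured work order comes from.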



In terms of public safety and security, Huazhi Future’s system is being applied to detect abnormal behavior in special scenarios: for example, identifying illegal gatherings or disruptive personnel near government buildings, detecting dangerous weapons near schools, or identifying intoxicated or fighting individuals near entertainment venues. The system can even issue early warnings of abnormal groupings of people involved in drug or sex-related activities.

These systems turn massive unstructured video data into structured intelligence on public safety and urban management, greatly improving the efficiency of grassroots governance for the government.

The key takeaway is that the B2G market isn’t won on technology alone, but on a combined barrier of credentials, relationships, and project experience. Once a company enters a local government system, it gains a significant first-mover advantage and strong customer stickiness.



The Future AI Competition​

From ChatGPT, Gemini, and Claude to DeepSeek, Kimi, and Qwen, these are the well-known large models for the consumer market. In the future, AI competition will clearly take on a tiered structure:

● The Consumer market determines the breadth of adoption.

● The Enterprise market/Government sector determines the depth of AI penetration into the real world and its potential to reshape national competitiveness.

And in this "deep water" space, what’s truly needed isn’t a super-powerful model but a set of secure, controllable, and deployable intelligent infrastructure. This is precisely the capability boundary MAAS is trying to build.

If we compare top-tier models like GPT to expensive "large computers," then Huazhi Future’s "Lingyan Miaoyu," a 7B secure model, is more like a "personal computer" deployed across thousands of industries, government departments, and even grassroots units.

AI’s first phase was a race for "capability limits," but the second phase will inevitably evolve into "engineering competition under cost and security constraints."

The models that will truly translate into tangible productivity and generate stable cash flow are not the smartest, but the most deployable.

Once you understand this, the significance of MAAS’s acquisition of Huazhi Future becomes crystal clear: they didn’t just acquire an experimental algorithmic capability, but a "security pass" to enter the government and enterprise market, along with a scalable, tested AI implementation system.



