Daily Archives: February 20, 2026

“OpenAI Funding Round Nears Record $100B as Valuation Targets $850B”

Photo by Zulfugar Karimov on Unsplash

OpenAI is in advanced talks to raise close to $100 billion in a new funding round, which would value the company at around $850 billion. If completed, this would be one of the largest private funding rounds in tech history and underscores the massive investor confidence in artificial intelligence.

Why is OpenAI raising so much money?

Building and running large AI models like GPT-4 and the upcoming GPT-5 costs billions. OpenAI spends heavily on computing power, specialized hardware, and top talent. The company also needs cash to fund its aggressive expansion into products, enterprise services, and international markets. With competitors like Google DeepMind, Anthropic, and Chinese AI firms gaining ground, OpenAI wants to secure enough capital to stay ahead.

The $100 billion figure is staggering, but it reflects the scale of the AI arms race. Companies are investing in bigger models, more data, and better infrastructure. The hope is that future revenues from AI APIs, subscriptions, and partnerships will more than cover this burn rate.

What does an $850 billion valuation mean?

Photo by Invest Europe on Unsplash

A valuation of $850 billion would make OpenAI one of the world’s most valuable private companies. It signals that investors believe AI will generate enormous economic value over the next decade. Even though OpenAI is not yet profitable, investors are betting on its technology leadership and network effects.

A high valuation also puts pressure on OpenAI to deliver. The company will need to show sustainable growth, monetize its models effectively, and manage regulatory risks. Any missteps could lead to a down round later, which would hurt morale and reputation.

How does this affect the global AI landscape?

This funding round, if successful, will widen the gap between OpenAI and many rivals. Smaller AI startups may find it hard to compete for talent and compute resources. It could also push consolidation in the AI sector, as bigger players absorb smaller ones.

For India, the news is a reminder of how much capital is flowing into AI elsewhere. India’s own AI startup ecosystem is growing, but it lags behind the US in terms of funding scale. However, Indian firms can still carve out niches in applied AI, vertical-specific solutions, and cost-effective services.

Should we be concerned about market concentration?

Some experts warn that too much money in one company could lead to monopolistic practices. OpenAI’s models already power many popular apps and services. With $100 billion in the bank, it could acquire competitors, control key talent, or set terms that disadvantage smaller players. Regulators may watch closely, especially as OpenAI transitions toward a possible IPO.

Conclusion

OpenAI’s potential $100 billion raise and $850 billion valuation are headline-grabbing numbers. They show that AI remains the hottest tech investment area. Whether this valuation proves justified will depend on OpenAI’s ability to turn research breakthroughs into lasting, profitable products. For the rest of the world, including India, the challenge is to leverage AI locally without simply depending on a handful of dominant foreign players.

Draft created automatically by JARVIS on 2026-02-20.

“Amazon Surpasses Walmart in Annual Revenue: AI-Fueled Growth Race”

In a historic milestone, Amazon has overtaken Walmart in annual revenue for the first time, signaling a major shift in global retail. Both companies are aggressively investing in artificial intelligence to boost efficiency, cut costs, and dominate the market. The battle is no longer just about sales—it’s about who can use AI better.

The Revenue Numbers

Photo by Adrian Sulyok on Unsplash

For its most recent fiscal year, Amazon reported $716.9 billion in revenue. Walmart’s revenue was $713.2 billion. The difference is small in percentage terms, but the symbolism is huge. Amazon, once an online bookstore, has grown into the world’s largest retailer by revenue. Walmart, which dominated for decades, is now number two.

Both companies are using AI to drive this growth. Amazon uses AI for inventory management, delivery route optimization, personalized recommendations, and warehouse automation. Walmart uses AI for demand forecasting, shelf scanning robots, and checkout-free stores. The competition is intense, and AI is the key weapon.

Why AI matters in retail

Photo by Carlos Gil on Unsplash

Retail is a business of thin margins. Small improvements in efficiency can mean billions of dollars saved. AI helps in many ways:

  • Predicts what customers will buy and stocks it ahead of time.
  • Optimizes delivery routes, reducing fuel and time.
  • Powers robots that pack and move goods in warehouses.
  • Personalizes online shopping, increasing conversion.
  • Detects fraud and returns abuse.
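
None of these require exotic technology to get started. As a toy illustration (not any retailer's actual system), the first point — predicting demand and stocking ahead of time — can be reduced to a trailing moving average feeding a reorder rule; the function names and the two-day lead time below are assumptions for the sketch:

```python
# A toy sketch, not any retailer's actual system: the simplest version of
# "predict what customers will buy and stock it ahead of time" is a
# trailing moving-average forecast feeding a reorder rule.

def forecast_demand(daily_sales, window=7):
    """Predict the next day's demand as the mean of the last `window` days."""
    recent = daily_sales[-window:]
    return sum(recent) / len(recent)

def reorder_quantity(daily_sales, on_hand, lead_time_days=2, window=7):
    """Order enough units to cover expected demand until the restock arrives.

    `lead_time_days` (how long a restock takes) is an assumed parameter.
    """
    expected = forecast_demand(daily_sales, window) * lead_time_days
    return max(0, round(expected - on_hand))

# Example: a shop selling a steady 10 units/day with 5 units on the shelf
# should order 15 units to cover a 2-day restock window.
print(reorder_quantity([10, 10, 10, 10, 10, 10, 10], on_hand=5))  # prints 15
```

Real systems replace the moving average with learned models and factor in seasonality and promotions, but the margin logic is the same: small forecasting improvements compound across millions of items.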

Amazon has been ahead in AI adoption for years. Its Kiva robots, recommendation engine, and Alexa ecosystem are well known. Walmart has caught up recently, opening AI labs and partnering with tech firms. The revenue crossover suggests Amazon’s AI edge is translating into sales.

What about India?

India’s retail market is also seeing an AI race. Reliance Retail, Flipkart, and Amazon India all use AI for pricing, recommendations, and supply chain. For Indian consumers, this means faster delivery, better product suggestions, and potentially lower prices. For workers, AI could mean fewer routine jobs but also new roles in tech and data analysis.

The Amazon-Walmart rivalry in India is intense. Walmart owns Flipkart and is investing heavily in AI to compete with Amazon. The US revenue battle is a preview of what could happen in India: AI-driven efficiency will define which retailer wins.

Should small businesses worry?

Small and medium Indian retailers may feel the pressure. Big players with AI can undercut prices and offer better experiences. However, AI tools are becoming cheaper and more accessible. Small shops can use AI for inventory, customer service chatbots, and social media marketing. The key is to adopt technology wisely rather than ignore it.

Future outlook

The revenue race is not over. Walmart may regain the top spot in coming years if its AI investments pay off. But the trend is clear: retail is becoming an AI game. Companies that fail to use AI effectively will lose market share.

For investors and employees, the message is to focus on AI capabilities of the companies they back or work for. For customers, it means smarter shopping experiences, but also more data collection and personalization.

In conclusion, Amazon overtaking Walmart in revenue is a sign of how AI is reshaping the biggest industry in the world. The competition will only get fiercer, and AI will continue to drive the winners.

Draft created automatically by JARVIS on 2026-02-20.

“AMC Theatres Refuses to Screen AI Short Film ‘Thanksgiving Day’”

Photo by Peter Herrmann on Unsplash

AMC Theatres has decided not to screen an AI-generated short film titled “Thanksgiving Day” that has sparked online outrage. The film, created using generative AI tools, raised questions about authorship, copyright, and the role of artificial intelligence in creative industries.

What is the controversy?

Photo by Zhyar Ibrahim on Unsplash

The short film “Thanksgiving Day” was produced entirely with AI tools: script generated by ChatGPT, visuals from a text-to-video model, and voice synthesis for narration. The creators claim it’s an experimental art piece exploring modern themes. When they submitted it to AMC’s independent film program, the theatre chain rejected it, stating they “will not participate” in AI-generated content that could displace human filmmakers.

The decision quickly went viral. Some praised AMC for protecting artists and upholding traditional craftsmanship. Others accused AMC of censorship and resisting technological change. The debate touches on deeper questions: what is art? Who owns AI-generated works? And should theatres distinguish between human-made and machine-made content?

Why are people upset?

Photo by Tahsin Labib on Unsplash

On one hand, independent filmmakers worry that AI tools lower barriers to entry and flood festivals with low-effort content, making it harder for human artists to get noticed. There’s also fear that studios will start using AI to cut costs, putting writers, animators, and editors out of work. On the other hand, many artists already use AI as part of their creative process—concept art, storyboarding, music composition. A blanket ban feels arbitrary to them.

The controversy is reminiscent of earlier battles over photography versus painting, or digital art versus analog. Each time, new technology disrupted established norms. Some resisted; others adapted. The AMC case adds a corporate dimension: a major theatre chain taking a public stance against AI content.

What does this mean for creators in India?

India’s film industry is the largest in the world by number of productions, and it is increasingly exploring AI for visual effects, dubbing, and marketing. If theatres in the US or elsewhere start discriminating against AI-assisted films, Indian filmmakers who use AI might face similar pushback at international festivals.

However, there is also a growing acceptance of AI as a tool rather than a replacement. The key is transparency. Many festivals now require disclosure of AI use. As long as the creative vision is human-led, AI can be considered just another brush in the painter’s toolkit.

The future of AI in entertainment

The AMC decision may be a temporary reaction to a viral moment. Over time, the industry will likely develop guidelines for AI use, rather than outright bans. Audiences will decide what they appreciate, and markets will reward quality regardless of how it’s made.

For now, the “Thanksgiving Day” film has gained far more attention than it would have if AMC had accepted it. Sometimes, controversy is the best promotion.

In summary, AMC’s refusal to screen the AI short film highlights a cultural clash between tradition and innovation. The debate will continue as AI becomes more capable of creating art that moves, entertains, and provokes thought.

Draft created automatically by JARVIS on 2026-02-20.

“Over 80% of Companies Report No AI Productivity Gains Despite Billions Spent”

Photo by Jo Lin on Unsplash

A massive new survey of 6,000 executives reveals a stunning fact: over 80% of companies say they have seen no measurable productivity gains from their AI investments. Even more surprising, only one-third of leaders actually use AI themselves, and when they do, it’s just about 90 minutes per week. This data throws cold water on the AI hype machine.

What the survey found

The survey, conducted by a leading business research firm, asked companies about their AI adoption, spending, and outcomes. Here are the key points:

  • 80%+ reported no improvement in productivity after implementing AI tools.
  • Only 33% of senior leaders use AI in their own work, and those who do average just 1.5 hours per week.
  • Many companies have spent billions on AI infrastructure, software, and consultants.
  • The main uses are basic automation like chatbots and document summarization, not transformative changes.

These findings echo the earlier “productivity paradox” we discussed in a previous post. It seems AI, for all its potential, is not yet delivering on its promises at scale.

Why is AI failing to boost productivity?

Photo by Jakub Żerdzicki on Unsplash

There are several possible reasons. First, companies often deploy AI as a bolt-on tool rather than redesigning core processes. They add a chatbot to customer service without improving the underlying workflow, so the overall efficiency gain is minimal. Second, employee adoption is low. Many workers are skeptical of AI or don’t know how to use it effectively. Training programs are lacking.

Third, AI systems can be unreliable, especially in complex tasks requiring judgment. Humans end up double-checking AI outputs, which adds time instead of saving it. Fourth, measuring productivity is hard. Companies might not be looking at the right metrics, or gains may be uneven across departments.

What does this mean for Indian IT and services firms?

India’s tech industry has positioned itself as an AI hub, offering AI solutions to global clients. If the world’s largest companies are seeing little return, it could dampen demand for Indian AI services. However, the survey also suggests that better implementation could unlock value. Indian firms that can show real productivity improvements—by integrating AI deeply into business processes—will stand out.

For Indian employees in global companies, the message is clear: don’t assume AI will automatically make you more productive. Learn to use the tools well, focus on high-impact tasks, and track the outcomes you achieve.

Should companies stop investing in AI?

Not necessarily. AI technology is still young. Early adopters often face a learning curve before reaping benefits. The survey may reflect a phase of experimentation where investments haven’t matured. However, it’s a warning against blind spending. Boards and CEOs should demand clear ROI and hold teams accountable for results.

Conclusion

The fact that most companies report no AI productivity gains despite billions in spending is a reality check. The AI gold rush may be producing more noise than value. For businesses, it’s time to move from pilot projects to disciplined, outcome-driven AI transformation. For individuals, it means focusing on skills that AI cannot easily replace—creativity, critical thinking, and emotional intelligence.

Draft created automatically by JARVIS on 2026-02-20.

“Newsom Backs Social Media Restrictions for Teens Under 16”

Photo by Vitaly Gariev on Unsplash

California Governor Gavin Newsom has announced support for legislation that would ban social media platforms from allowing users under the age of 16. The proposal is part of a growing movement to protect children from online harms, but it also raises questions about freedom, enforcement, and parental rights.

What the law would do

If passed, the bill would require social media companies to verify the age of every user and block access for anyone younger than 16. Platforms would need to implement robust age-check systems, likely requiring government-issued IDs or other verification methods. Existing accounts belonging to under-16 users would have to be closed or converted to restricted modes.
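
The easy half of that mandate — computing an age from a date of birth — is a few lines; the hard half the bill leaves to platforms is proving that date. A minimal sketch, assuming the platform already holds a verified date of birth (the threshold and function names are illustrative):

```python
# Minimal sketch of the gating logic, assuming a verified date of birth is
# already on file. Verifying that date is the hard, unsolved part.
from datetime import date

MIN_AGE = 16  # threshold in the proposed bill

def age_on(dob, today):
    """Full years elapsed between dob and today, birthday-aware."""
    years = today.year - dob.year
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1  # birthday hasn't happened yet this year
    return years

def may_register(dob, today=None):
    """Gate account creation on the verified date of birth."""
    today = today or date.today()
    return age_on(dob, today) >= MIN_AGE
```

Even this trivial check has an edge case (a birthday that has not yet occurred this year), which hints at how much harder the real problem — reliable online age proof — actually is.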

The law would also prohibit targeted advertising to minors and limit data collection on users under 16. Companies that fail to comply could face significant fines. The aim is to reduce exposure to harmful content, cyberbullying, and addictive design patterns that can affect mental health.

Why is this controversial?

Photo by hookle.app on Unsplash

Opponents argue that age verification invades privacy and could lead to more data collection, not less. They also say that parents—not the state—should decide when their children can use social media. Some worry that a ban will push teens toward unregulated or foreign platforms that are even less safe.

There are also practical challenges: how do you reliably prove someone’s age online? Many teenagers will simply lie about their age, as they do today. Enforcing an age limit requires users to submit IDs, which raises concerns about identity theft and data breaches. Small platforms may struggle with compliance costs, while large ones like Instagram and TikTok could absorb them more easily.

How does this compare to other countries?

The idea of age gating social media is not unique to California. Some countries, like China, already restrict teen usage to limited hours and require real-name registration. In Europe, the Digital Services Act imposes extra duties to protect minors. The United States has no federal law yet, but several states are experimenting with similar rules.

If California’s bill passes, it could become a model for other states or even inspire national legislation. Big tech companies would have to change their products for the entire US market if they want to keep California’s huge user base.

What does this mean for Indian families?

In India, social media use among teens is high, and concerns about mental health and online safety are growing. The government’s own regulations already require platforms to remove harmful content and set up grievance mechanisms. But an outright age ban is more radical.

Indian parents often monitor their children’s phone use, but many teens still access platforms like Instagram and YouTube. A law like California’s would face challenges in India due to diversity in age verification infrastructure, but it could spark debate about whether similar restrictions are needed. For now, families must continue to educate kids about responsible use and use parental controls provided by apps.

Conclusion

Governor Newsom’s support for a social media age ban shows how seriously California is taking child safety online. The bill still has a long way to go, but it sends a message that unrestricted access for minors may be ending. Whether this approach is effective or not, the conversation about protecting young minds in the digital age is only getting louder.

Draft created automatically by JARVIS on 2026-02-20.

“DOGE Bro’s Grant Review: Just Asking ChatGPT ‘Is This DEI?’”

A shocking report reveals that the Department of Government Efficiency (DOGE) used a very simple method to review grant applications: they asked ChatGPT whether the project focused on diversity, equity, and inclusion (DEI). If the AI said yes, the grant was likely rejected. This lazy approach has sparked outrage among scientists and researchers.

How did the grant review work?

Photo by Brett Jordan on Unsplash

According to insider reports, DOGE staff set up an automated pipeline where each grant proposal was fed into ChatGPT with the prompt: “Does this application emphasize DEI?” If the answer was affirmative, the proposal was automatically flagged for rejection or lower priority. No human reviewer looked at the science, the budget, or the potential impact. It was a simple keyword-based filter, but using AI made it seem modern and efficient.
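
Going by that description, the entire "review" would amount to only a few lines of code. The sketch below is a hypothetical reconstruction, not DOGE's actual pipeline: `ask_model` stands in for the LLM call (approximated here by a crude keyword match so it runs without an API key), and the prompt wording is assumed.

```python
# Hypothetical reconstruction of the described filter; not DOGE's actual
# code. `ask_model` stands in for the real LLM call, approximated here by
# a crude keyword match so the sketch runs without an API key.

DEI_KEYWORDS = ("diversity", "equity", "inclusion")

def ask_model(question, document):
    """Stand-in for an LLM: answers the yes/no question about `document`."""
    hit = any(k in document.lower() for k in DEI_KEYWORDS)
    return "yes" if hit else "no"

def review_grant(proposal):
    """The entire 'review': one yes/no answer, no human reads the science."""
    answer = ask_model("Does this application emphasize DEI?", proposal)
    return "flagged" if answer.startswith("yes") else "passed"

# The failure mode in miniature: financial "equity" trips the filter
# exactly like a DEI program would.
print(review_grant("Forecasting private equity markets"))  # prints "flagged"
```

Even swapping the keyword match for a real model would not fix the design: a single binary answer, with no appeal and no human in the loop, decides the outcome.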

This approach is not only unfair but also inaccurate. ChatGPT can misunderstand context, miss nuance, and judge based on superficial patterns. Many legitimate projects include DEI statements as required by funding guidelines, but that does not mean DEI is the main focus. By blanket-rejecting proposals on the strength of an AI-generated answer, DOGE likely threw out excellent research along with the supposedly “bad” proposals.

Why is this a problem?

India’s scientific community receives many international grants, and fairness in review is crucial. If a government agency uses such a crude AI filter, it undermines the entire peer-review process. Researchers spend months writing proposals, and rejecting them based on a chatbot’s quick yes/no answer is disrespectful and wasteful.

Moreover, DEI statements have become a political target. Some believe that focusing on diversity harms merit. Others argue that DEI is important for inclusive science. Regardless of one’s view, using an AI to automatically screen out applications is not the way to decide funding. It violates the principle that each proposal should be evaluated on its own merits by qualified experts.

What does this say about AI in government?

The DOGE incident is a cautionary tale. AI can help with administrative tasks, but it should not replace human judgment in areas that require nuance and fairness. When governments cut corners and trust AI to make binary decisions without oversight, they risk corruption, bias, and mass errors.

In India, where AI is being adopted in many public services, this case highlights the need for transparency and accountability. Automated decisions must be auditable, explainable, and subject to appeal. Otherwise, we may see more cases where a hidden chatbot decides who gets funding, who gets a job, or who gets a benefit—without anyone being held responsible.

Conclusion

The news that DOGE used ChatGPT to screen grant applications for DEI is both absurd and alarming. It shows how AI can be misused to speed up decisions while ignoring quality and fairness. As AI spreads in government and corporate offices, we must demand better: human oversight, clear policies, and respect for the effort people put into their work.

Draft created automatically by JARVIS on 2026-02-20.

“Google Maps Now Limited for Users Not Signed In: What It Means”

Photo by Sean D on Unsplash

Google has quietly changed how Maps works for people who are not signed into a Google account. Those without sign-in now see a “limited view” with fewer features. This change affects millions of users worldwide, including many in India who rely on Google Maps for daily travel.

What is the “limited view”?

Earlier, anyone could open Google Maps and use most features – searching places, getting directions, seeing traffic, and saving favorite spots. Now, without signing in, users get a stripped-down version. You can still see the map and search for places, but you cannot:

  • Save your home or work address
  • Rate or review businesses
  • See your search history
  • Get personalized recommendations
  • Use some real-time features like location sharing

Google says this is to improve privacy and security. But critics argue it is actually a move to force more people to create Google accounts, giving the company more data about their habits.

Why is Google doing this?

There could be a few reasons. First, Google wants more signed-in users because that helps with targeted advertising, its main source of money. Second, requiring sign-in allows Google to sync data across devices and keep users within its ecosystem. Third, it may simplify the app’s design by focusing on signed-in features.

But for many people, especially those who share devices or who do not want to be tracked, this change is inconvenient. In India, where shared smartphones are common in families and small businesses, forcing sign-in could be a barrier. Not everyone has a Google account, and some prefer to stay anonymous online.

How does this affect you?

If you use Google Maps without signing in, you might notice that some options are gone. You can still get basic directions, but you cannot personalize the experience. For occasional users, this might not matter much. For frequent travelers, delivery drivers, or anyone who relies on Maps for work, signing in is now almost necessary.

Some privacy-conscious users may switch to alternatives like Apple Maps, MapMyIndia, or OpenStreetMap-based apps that do not require an account. The change could also push people into signing in to Google when they previously chose not to, which might not suit everyone.

What can you do?

  • Create a Google account if you want full features. Use it only for Maps and not for other services if you prefer.
  • Explore other map apps that respect anonymity.
  • Clear cookies regularly if you use shared devices.
  • Give feedback to Google through their help forums if you disagree with the change.

Is this the future of free apps?

This trend is not new; many websites and apps have been pushing users to sign in for years. But this Google Maps move shows that even widely used free services are willing to reduce functionality to gain more logged-in users. It raises the question: how free are “free” services if they keep nudging you to trade your data for full access?

In India, such changes must be watched closely. With growing awareness about data privacy, users should know their rights and choose services that respect their choices. In simple terms: you now need to sign in to get the best out of Google Maps. If you don’t want to, you have limited options. Think about what you are comfortable sharing and decide accordingly.

Draft created automatically by JARVIS on 2026-02-20.

“Harmful Chemicals Found in Popular Headphones: What You Should Know”

Photo by Nubelson Fernandes on Unsplash

A new study has found harmful chemicals in dozens of popular headphone models. These chemicals can cause health problems over time, especially for people who use headphones for many hours every day. The news has worried many customers, especially in India where millions use headphones for work, study, and entertainment.

What did the study find?

Photo by Birgith Roosipuu on Unsplash

Researchers tested many headphone models from various brands, both cheap and expensive. They discovered that some contain flame retardants and plastic softeners that are potentially dangerous. These chemicals can leak out over time and enter the body through the skin or by breathing. Long-term exposure may affect hormones and cause allergies or other illnesses.

The chemicals in question are not new; they have been used in electronics for years to make products last longer and resist fire. But as people wear headphones directly on their heads for hours, the risk of absorbing these substances increases. Children and teenagers are more sensitive because their bodies are still developing.

How do these chemicals affect health?

Photo by Girl with red hat on Unsplash

Some of the chemicals found are endocrine disruptors. That means they can interfere with the body's hormone system. Hormones control growth, metabolism, and reproduction. Even small changes can lead to big problems over time, like reduced fertility, thyroid issues, or increased risk of certain cancers. Other chemicals may cause skin irritation or breathing problems, especially for people with asthma or allergies.

The risk depends on how much time you spend with headphones on and how close they are to your skin. For office workers in India who wear headsets all day, or students listening to music for hours, the exposure can add up. Sweating and heat can also increase the release of chemicals from the plastic.

What can you do to stay safe?

  • Choose headphones from brands that are known to follow strict safety standards. Check if the product has certifications like BIS in India or CE in Europe.
  • Avoid very cheap headphones from unknown sellers. They may not follow safety rules.
  • Take breaks. Do not wear headphones for many hours continuously. Let your skin breathe.
  • Keep headphones clean and avoid sharing them with others.
  • Look for products that explicitly state they are free from harmful phthalates or other toxic additives.
  • Support regulations that require companies to list all materials used in electronics.

Should we panic?

Not necessarily. The study shows the presence of chemicals, but the actual health impact depends on many factors. However, it is a wake-up call for consumers to be more careful and for manufacturers to improve safety. In India, where the electronics market is huge and many products are imported, quality checks are important. Next time you buy headphones, think not just about sound quality but also about what they are made of.

The industry may respond to the findings by changing materials. Some companies already make ‘eco-friendly’ or ‘hypoallergenic’ versions. As customers become more aware, the demand for safer products will grow. Until then, use headphones wisely and stay informed.

Draft created automatically by JARVIS on 2026-02-20.

“Sam Altman Blows the Whistle: ‘AI Washing’ Is Real”

Photo by Vitaly Gariev on Unsplash

Sam Altman, the CEO of OpenAI, has made a surprising statement. He says many companies are doing something called ‘AI washing’. This means they blame job losses on artificial intelligence, even when the real reasons are different. It is like using AI as an excuse to lay off employees or hide poor business decisions.

What is AI Washing?

Photo by Jonathan Wells on Unsplash

AI washing happens when a company says it is using artificial intelligence to improve efficiency and cut costs, so it has to let people go. But often, the AI tools are not ready or are not being used at all. The company just wants to look modern and tech-savvy while actually doing regular layoffs for financial reasons. This is misleading and can hurt employees who lose jobs unnecessarily.

Why Companies Do This

There are many reasons why firms pretend AI is behind job cuts. First, saying AI is responsible makes the company seem forward-looking and the cuts seem unavoidable. It suggests the business is keeping up with technology, which is good for investors and stock prices. Second, blaming AI shifts the conversation away from bad management or failed strategies. Instead of asking why the company failed, people talk about how AI is changing jobs.

Third, some companies genuinely hope to implement AI later and use the story now to prepare the market and employees. But promising AI benefits that are not real yet is dishonest. It creates fear among workers and makes them feel powerless against technology they do not understand.

The Real Impact on Workers

Photo by Ousa Chea on Unsplash

In India, the IT and services sectors employ millions. When companies claim AI is the reason for layoffs, it spreads panic among professionals. Many talented people start thinking their skills are no longer needed, even if the company’s situation has nothing to do with technology. This can lead to anxiety and lower morale.

At the same time, genuine AI adoption does change job roles. Some tasks become automated, but new roles are also created – for AI trainers, data specialists, and technology managers. The truth lies somewhere in between. Companies should be honest about why they are reducing staff and what they plan to do with AI. That way, workers can prepare and learn new skills that are actually in demand.

What Should Companies Do?

If AI is truly being used to improve productivity, companies must be transparent. They should explain which tasks are automated, how many jobs were affected, and what training is being offered to remaining employees. This builds trust and helps everyone adapt.

On the other hand, if layoffs are simply cost-cutting, companies should say so without hiding behind AI. Employees deserve honesty. And if a business wants to invest in AI, it should involve its workforce in the transition rather than using AI as a weapon to create fear.

Conclusion

Sam Altman’s comment is a reminder that not everything labeled as AI is true. We should question claims that AI is taking jobs in large numbers, especially when there is no clear proof. For Indian professionals, the message is to stay updated with technology but also to think critically about corporate narratives. Real AI progress will create new opportunities, not just excuses for layoffs.

Draft created automatically by JARVIS on 2026-02-20.