Nvidia beats expectations with Q2 results
Chipmaker Nvidia beat market expectations with its second-quarter results and surprised analysts and investors with its guidance for the current quarter.
As reported by CNBC, Nvidia's revenue for the quarter ending July 30 stood at $13.51 billion versus $11.22 billion expected by Refinitiv. Net income jumped to $6.19 billion, or $2.48 a share, from $656 million, or 26 cents, a year earlier.
The CNBC report further added that "Nvidia expects fiscal third-quarter revenue of about $16 billion, higher than the $12.61 billion forecast by Refinitiv. Nvidia's guidance suggests sales in the current quarter will grow 170 per cent from the year-earlier period."
Nvidia also said its board of directors authorized $25 billion in share buybacks after the company purchased $3.28 billion of its shares during the quarter, the CNBC report added.
Nvidia's results reflect investors' growing interest in artificial intelligence (AI), which took off with the arrival of OpenAI's ChatGPT language-generation tool. AI is growing rapidly, and many companies now see it as critical to their future growth.
Nvidia's strong quarterly sales numbers and guidance underscore the significance of the company's graphics processing units (GPUs) for the generative AI boom. As per CNBC, "Nvidia's A100 and H100 AI chips are needed to build and run AI applications like OpenAI's ChatGPT and other services that take simple text queries and respond with conversational answers or images."
The CNBC report highlighted that Nvidia's data centre business, which includes AI chips, drove its stellar performance as large consumer internet companies and cloud service providers, including Alphabet, Amazon and Meta, snapped up next-generation processors. The company reported $10.32 billion in revenue for the group, up 171 per cent year over year and above the $8.03 billion estimate, according to StreetAccount.
Nvidia's shares are the top performer in the S&P 500 index this year, having more than tripled.
Disclaimer: The views and recommendations above are those of individual analysts and broking companies, not of Mint. We advise investors to check with certified experts before taking any investment decisions.
Source: Live Mint
Google takes first step to add plugins on Bard with own apps
On Tuesday, Google announced the latest addition to its generative artificial intelligence (AI) chatbot, Bard: the ability for users to link their entire suite of Google apps to the chatbot. With the integration, users will be able to pull information from their stored documents and spreadsheets, as well as tap public Google services such as Maps and YouTube, within Bard’s responses.
The move marks Google's first step towards keeping pace with rival OpenAI's chatbot, ChatGPT, in its use of plugins. However, speaking at a media roundtable, Amar Subramanya, vice-president of engineering at Google, declined to offer a timeline for when plugin support would be extended to third parties as well.
Plugins, which are mini versions of applications that can be integrated across different software platforms, offer a way for applications to communicate with each other. For instance, a video-conferencing application can use the plugin of a user's calendar app to seamlessly pull in their schedule.
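The plugin pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration (all class and method names here are invented, not any real Google or OpenAI API): a host application keeps a registry of plugins and dispatches calls to them without knowing their internals.

```python
# Hypothetical sketch of the plugin pattern: a host app exposes a registry,
# and plugins register objects the host can invoke by name.

class PluginRegistry:
    """Lets a host app (e.g. a chatbot) call into other apps' plugins."""
    def __init__(self):
        self._plugins = {}

    def register(self, name, plugin):
        self._plugins[name] = plugin

    def call(self, name, action, **kwargs):
        # Dispatch to the named plugin without knowing its internals.
        plugin = self._plugins[name]
        return getattr(plugin, action)(**kwargs)

class CalendarPlugin:
    """Stands in for a calendar app exposing its schedule to other apps."""
    def __init__(self, events):
        self._events = events  # {date: [event, ...]}

    def events_on(self, date):
        return self._events.get(date, [])

# A video-conferencing app could use the calendar plugin to pull in a
# user's schedule without depending on the calendar app's code directly:
registry = PluginRegistry()
registry.register("calendar", CalendarPlugin({"2023-09-05": ["Team sync"]}))
busy = registry.call("calendar", "events_on", date="2023-09-05")
print(busy)  # ['Team sync']
```

The indirection through the registry is what makes plugins portable: the host only needs to agree on names and call signatures, not on implementations.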
On Tuesday, Google said the latest version of its Bard chatbot, which integrates its own apps as plugins, has also been upgraded to a new version of its underlying large language model (LLM), Pathways Language Model 2 (PaLM 2). The first rollout of plugins will be available only in English, while Bard conversations can also be shared between users to continue collaborative conversations.
Subramanya also confirmed that the new version of Bard will now offer image-based queries and responses in over 40 languages—support for which Google had announced on 13 July. Indian languages included in this support are Bangla, Gujarati, Hindi, Kannada, Malayalam, Marathi, Tamil, Telugu and Urdu.
While this brings support for popular Google services, Bard continues to trail OpenAI's ChatGPT, which started rolling out support for third-party plugins on 23 March and introduced ChatGPT Enterprise on 28 August. The enterprise edition promises data privacy to paying customers deploying ChatGPT for internal or business use cases, with the company stating in a blog post that customer data will not be used to train its underlying LLMs.
Google, too, made a similar claim with its Tuesday announcement. In a blog post, Yury Pinsky, director of product management for Bard, said that although the extensions reach into content stored in users' email accounts, which carries significant privacy implications, Bard will not use that data to show ads or to train the underlying LLM. “If you choose to use the Workspace extensions, your content from Gmail, Docs and Drive is not seen by human reviewers, used by Bard to show you ads or used to train the Bard model…You're always in control of your privacy settings when deciding how you want to use these extensions," he said.
Subramanya, at the roundtable, added, “We are very transparent with users in terms of what data gets collected, and giving them control over the data."
Source: Live Mint
Could OpenAI be the next tech giant?
The creation of a new market is like the start of a long race. Competitors jockey for position as spectators excitedly clamour. Then, like races, markets enter a calmer second phase. The field orders itself into leaders and laggards. The crowds thin.
In the contest to dominate the future of artificial intelligence, OpenAI, a startup backed by Microsoft, established an early lead by launching ChatGPT last November. The app reached 100m users faster than any before it. Rivals scrambled. Google and its corporate parent, Alphabet, rushed the release of a rival chatbot, Bard. So did startups like Anthropic. Venture capitalists poured over $40bn into AI firms in the first half of 2023, nearly a quarter of all venture dollars this year. Then the frenzy died down. Public interest in AI peaked a couple of months ago, according to data from Google searches. Unique monthly visits to ChatGPT's website have declined from 210m in May to 180m now (see chart).
The emerging order still sees OpenAI ahead technologically. Its latest AI model, GPT-4, is beating others on a variety of benchmarks (such as an ability to answer reading and maths questions). In head-to-head comparisons, it ranks roughly as far ahead of the current runner-up, Anthropic's Claude 2, as the world's top chess player does against his closest rival—a decent lead, even if not insurmountable. More important, OpenAI is beginning to make real money. According to The Information, an online technology publication, it is earning revenues at an annualised rate of $1bn, compared with a trifling $28m in the year before ChatGPT's launch.
Can OpenAI translate its early edge into an enduring advantage, and join the ranks of big tech? To do so it must avoid the fate of erstwhile tech pioneers, from Netscape to Myspace, which were overtaken by rivals that learnt from their early successes and stumbles. And as it is a first mover, the decisions it takes will also say much about the broader direction of a nascent industry.
OpenAI is a curious firm. It was founded in 2015 by a clutch of entrepreneurs including Sam Altman, its current boss, and Elon Musk, Tesla's technophilic chief executive, as a non-profit venture. Its aim was to build artificial general intelligence (AGI), which would equal or surpass human capacity in all types of intellectual tasks. The pursuit of something so outlandish meant that it had its pick of the world's most ambitious AI technologists. While working on an AI that could master a video game called “Dota", they alighted on a simple approach that involved harnessing oodles of computing power, says an early employee who has since left. When in 2017 researchers at Google published a paper describing a revolutionary machine-learning technique they christened the “transformer", OpenAI's boffins realised that they could scale it up by combining untold quantities of data scraped from the internet with processing oomph. The result was the generative pre-trained transformer, or GPT for short.
Obtaining the necessary resources required OpenAI to employ some engineering of the financial variety. In 2019 it created a “capped-profit company" within its non-profit structure. Initially, investors in this business could make 100 times their initial investment—but no more. Rather than distribute equity, the firm distributes claims on a share of future profits that come without ownership rights (“profit-participation units"). What is more, OpenAI says it may reinvest all profits until the board decides that OpenAI's goal of achieving AGI has been reached. OpenAI stresses that it is a “high-risk investment" and should be viewed as more akin to a “donation". “We're not for everybody," says Brad Lightcap, OpenAI's chief operating officer and its financial guru.
Maybe not, but with the exception of Mr Musk, who pulled out in 2018 and is now building his own AI model, just about everybody seems to want a piece of OpenAI regardless. Investors appear confident that they can achieve venture-scale returns if the firm keeps growing. In order to remain attractive to investors, the company itself has loosened the profit cap and switched to one based on the annual rate of return (though it will not confirm what the maximum rate is). Academic debates about the meaning of AGI aside, the profit units themselves can be sold on the market just like standard equities. The firm has already offered several opportunities for early employees to sell their units.
SoftBank, a risk-addled tech-investment house from Japan, is the latest to be seeking to place a big bet on OpenAI. The startup has so far raised a total of around $14bn. Most of it, perhaps $13bn, has come from Microsoft, whose Azure cloud division is also furnishing OpenAI with the computing power it needs. In exchange, the software titan will receive the lion's share of OpenAI's profits—if these are ever handed over. More important in the short term, it gets to license OpenAI's technology and offer this to its own corporate customers, which include most of the world's largest companies.
It is just as well that OpenAI is attracting deep-pocketed backers. For the firm needs an awful lot of capital to procure the data and computing power necessary to keep creating ever more intelligent models. Mr Altman has said that OpenAI could well end up being “the most capital-intensive startup in Silicon Valley history". OpenAI's most recent model, GPT-4, is estimated to have cost around $100m to train, several times more than GPT-3.
For the time being, investors appear happy to pour more money into the business. But they eventually expect a return. And for its part OpenAI has realised that, if it is to achieve its mission, it must become like any other fledgling business and think hard about its costs and its revenues.
GPT-4 already exhibits a degree of cost-consciousness. For example, notes Dylan Patel of SemiAnalysis, a research firm, it was not a single giant model but a mixture of 16 smaller models. That makes it more difficult, and so costlier, to build than a monolithic model. But it is then cheaper to actually use once trained, because not all the smaller models are needed to answer a given question. Cost is also a big reason why OpenAI is not training its next big model, GPT-5. Instead, say sources familiar with the firm, it is building GPT-4.5, which would have “similar quality" to GPT-4 but cost “a lot less to run".
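The mixture-of-experts design described above can be illustrated with a toy router. This is a hedged sketch, not OpenAI's actual (unpublished) implementation: the point is simply that a routing step selects a few of many small experts per query, so inference cost scales with the experts consulted rather than with the full model. The scoring here is a stand-in for the learned gating network a real system would use.

```python
# Toy illustration of mixture-of-experts routing: only top_k of the
# experts run per query, so per-query compute scales with
# top_k / num_experts of the full model. The "scores" are placeholders
# for what a trained gating network would produce.
import random

NUM_EXPERTS = 16  # the reported (illustrative) count for GPT-4
TOP_K = 2         # experts consulted per query

def route(query, num_experts=NUM_EXPERTS, top_k=TOP_K):
    """Score each expert for the query and keep the top_k indices."""
    rng = random.Random(sum(map(ord, query)))  # deterministic per query
    scores = [rng.random() for _ in range(num_experts)]
    ranked = sorted(range(num_experts), key=lambda i: scores[i], reverse=True)
    return ranked[:top_k]

experts_used = route("What is the capital of France?")
print(len(experts_used), "of", NUM_EXPERTS, "experts consulted")
```

Training 16 experts plus a router is harder than training one monolith, but at inference time 14 of the 16 stay idle for any given question, which is where the running-cost savings come from.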
But it is on the revenue-generating side of the business that OpenAI is most transformed, and where it has been most energetic of late. AI can create a lot of value long before AGI brains are as versatile as human ones, says Mr Lightcap. OpenAI's models are generalist, trained on a vast amount of data and capable of doing a variety of tasks. The ChatGPT craze has made OpenAI the default option for consumers, developers and businesses keen to embrace the technology. Despite the recent dip, ChatGPT still receives 60% of traffic to the top 50 generative-AI websites, according to a study by Andreessen Horowitz, a venture-capital (VC) firm which has invested in OpenAI (see chart).
Yet OpenAI is no longer only—or even primarily—about ChatGPT. It is increasingly becoming a business-to-business platform. It is creating bespoke products of its own for big corporate customers, which include Morgan Stanley, an investment bank. It also offers tools for developers to build products using its models; on November 6th it is expected to unveil new ones at its first developer conference. And it has a $175m pot to invest in smaller AI startups building applications on top of its platform, which at once promotes its models and allows it to capture value if the application-builders strike gold. To further spread its technology, it is handing out perks to AI firms at Y Combinator, a Silicon Valley startup nursery that Mr Altman used to lead. John Luttig of Founders Fund (a VC firm which also has a stake in OpenAI), thinks that this vast and diverse distribution may be even more important than any technical advantage.
Being the first mover certainly plays in OpenAI's favour. GPT-like models' high fixed costs erect high barriers to entry for competitors. That in turn may make it easier for OpenAI to lock in corporate customers. If they are to share internal company data in order to fine-tune the model to their needs, many clients may prefer not to do so more than once—for cyber-security reasons, or simply because it is costly to move data from one AI provider to another, as it already is between computing clouds. Teaching big models to think also requires lots of tacit engineering know-how, from recognising high-quality data to knowing the tricks to quickly debug the source code. Mr Altman has speculated that fewer than 50 people in the world are at the true model-training frontier. A lot of them work for OpenAI.
These are all real advantages. But they do not guarantee OpenAI's continued dominance. For one thing, the sort of network effects where scale begets more scale, which have helped turn Alphabet, Amazon and Meta into quasi-monopolists in search, e-commerce and social networking, respectively, have yet to materialise. Despite its vast number of users, GPT-4 is hardly better today than it was six months ago. Although further tuning with user data has made it less likely to go off the rails, its overall performance has changed in unpredictable ways, in some cases for the worse.
Being a first mover in model-building may also bring some disadvantages. The biggest cost for modellers is not training but experimentation. Plenty of ideas went nowhere before the one that worked got to the training stage. That is why OpenAI is estimated to have lost $500m last year, even though GPT-4 cost one-fifth as much to train. News of ideas that do not pay off tends to spread quickly throughout the AI world. This helps OpenAI's competitors avoid going down costly blind alleys.
As for customers, many are trying to reduce their dependence on OpenAI, fearful of being locked into its products and thus at its mercy. Anthropic, which was founded by defectors from OpenAI, has already become a popular second choice for many AI startups. Soon businesses may have more cutting-edge alternatives. Google is building Gemini, a model believed to be more powerful than GPT-4. Even Microsoft is, despite its partnership with OpenAI, something of a competitor. It has access to GPT-4's black box, as well as a vast sales force with long-standing ties to the world's biggest corporate IT departments. This array of choices diminishes OpenAI's pricing power. It is also forcing Mr Altman's firm to keep training better models if it wants to stay ahead.
The fact that OpenAI's models are a black box also reduces its appeal to some potential users, including large businesses concerned about data privacy. They may prefer more transparent “open-source" models like Meta's LLaMA 2. Sophisticated software firms, meanwhile, may want to build their own model from scratch, in order to exercise full control over its behaviour.
Others are moving away from generality—the ability to do many things rather than just one thing—by building cheaper models that are trained on narrower sets of data, or to do a specific task. A startup called Replit has trained one narrowly to write computer programs. It sits atop Databricks, an AI cloud platform which counts Nvidia, a $1trn maker of specialist AI semiconductors, among its investors. Another called Character AI has designed a model that lets people create virtual personalities based on real or imagined characters that can then converse with other users. It is the second-most popular AI app behind ChatGPT.
The core question, notes Kevin Kwok, a venture capitalist (who is not a backer of OpenAI), is how much value is derived from a model's generality. If not much, then the industry may be dominated by many specialist firms, like Replit or Character AI. If a lot, then big models such as those of OpenAI or Google may come out on top.
Mike Speiser of Sutter Hill Ventures (another non-OpenAI backer) suspects that the market will end up containing a handful of large generalist models, with a long tail of task-specific models. If AI turns out to be all it is cracked up to be, being an oligopolist could still earn OpenAI a pretty penny. And if its backers really do see any of that penny only after the company has created a human-like thinking machine, then all bets are off.
© 2023, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com
Source: Live Mint
Digital India Experience Zone
The ‘Digital India Experience Zone' at the upcoming G20 Leaders Summit in New Delhi will be a standout attraction showcasing India's ‘digital' journey since 2014. The transformative potential of digital public infrastructure, or DPI, will be highlighted through seven platforms developed in the country to boost the digital economy – Aadhaar, UPI, DigiLocker, DIKSHA, Bhashini, ONDC and eSanjeevani.
Apart from offering visitors immersive experiences such as interactive displays and virtual reality simulations, the one-of-a-kind zone will have a kiosk introducing the GITA (Guidance Inspiration Transformation and Action) application, where visitors can seek answers to questions related to life and principles in alignment with the Bhagavad Gita.
“The seven DPI components will be showcased in the ‘Experience Zone' along India's digital journey wall, in front of which there will be a cycle. The visitors can ride it to see the visual representation of the digital journey and, as long as they pedal, the video will move forward,” an official said.
The official added: “Similarly, if they visit the Bhashini booth they can use the technology to translate any language. Even though the Bhashini platform supports Indian languages, for the G20 purpose, it has been connected to international languages as well. The visitors can converse with the platform to seek information related to tourism or G20 or any other topic.”
Visitors can explore major milestones of ‘Digital India' through virtual reality simulations, effectively bringing to life advances in the digital space over the last few years. This zone will give an insight into DPI's core principles and the evolution of ‘Digital India' initiatives.
One of the key highlights will be the interactive demonstration of the collaboration of ONDC (Open Network for Digital Commerce) with sellers, customers and network providers at a large scale. This exhibit is expected to illustrate ONDC's potential in transforming India's digital commerce landscape and fostering economic growth.
The GITA app will enable visitors to seek answers to questions in alignment with the Bhagavad Gita. The ‘Ask GITA' feature of the app is powered by advanced GPT-4 language model technology, which will answer questions by giving insights based on the holy book in English and Hindi.
“If a visitor asks a question to this AI model, GITA will respond according to the principles of Bhagavad Gita and what the holy book thinks about the question asked, with quotes,” an official said.
Officials further said the ‘Digital India Experience Zone' has been meticulously designed to resonate with its target audience, and underscore India's commitment to fostering digital inclusivity and innovation in its development agenda.
Arm Targets More Than $52 Billion Valuation in Largest IPO of the Year
British chip designer Arm Ltd. is targeting a valuation of more than $52 billion from what is expected to be the largest initial public offering this year, according to the company’s latest regulatory filing.
SoftBank Group, Arm's owner, plans to sell roughly 10% of the total shares outstanding, setting a share price of between $47 and $51 apiece. The Securities and Exchange Commission filing comes as Arm's management hits the road starting Tuesday to meet with prospective investors and solicit support for the offering.
The offering, expected to be the largest of the year, is an important test case for the sustainability of the recent revival in the IPO market because of its considerable size. It will follow the successful but smaller issues in June by restaurant chain Cava Group and in July by Oddity Tech, a direct-to-consumer seller of makeup brands.
Several technology companies, many of which are Arm customers, have indicated that they could buy a total of $735 million worth of stock in the offering, according to Tuesday's filing. That would represent a tiny fraction of the chip designer's expected value. Still, it could signal a vote of confidence in the business for other institutional investors that might consider participating in the IPO.
Advanced Micro Devices, Apple, Intel, Nvidia and Samsung Electronics are among the companies that plan to buy shares in the IPO, according to the filing.
Arm's target valuation of $48 billion to more than $52 billion is slightly lower than the $50 billion to $55 billion range reported by The Wall Street Journal last week. The range also falls short of the $64 billion value implied by SoftBank's recent deal to buy the remaining 25% stake in Arm from its Vision Fund unit.
People close to the deal say the lower target valuation isn't set in stone. They expect strong demand on the chip designer's roadshow to push the price higher. In many highly anticipated IPOs, companies and their underwriters start with a lower target valuation and go on to price far higher.
Final pricing is expected as soon as some time next week followed by Arm's stock market debut on the Nasdaq exchange, people familiar with the plan have said.
The offering gives SoftBank a way to sell down its position in the chip designer over time. If the stock goes up in coming months it could provide a bigger return. It also provides SoftBank with fresh capital to restart its wide-ranging investments in tech startups. The company recently said it wants to renew its push for large-scale investments in artificial intelligence.
Write to Ben Dummett at firstname.lastname@example.org
Source: Live Mint
ChatGPT Creator Releases Guide For Teachers Using Generative AI To Teach Students
Microsoft-backed OpenAI has released a new guide for teachers using its AI chatbot, ChatGPT, to help educators effectively incorporate the generative AI tool into their students' learning.
The newly released guide includes suggested prompts, an explanation of how ChatGPT works and its limitations, notes on the efficacy of AI detectors, and a discussion of bias.
“We're releasing a guide for teachers using ChatGPT in their classroom — including suggested prompts, an explanation of how ChatGPT works and its limitations, the efficacy of AI detectors, and bias," OpenAI said in a blog post.
On its announcement blog, the company shared examples of how professors and teachers are already using the chatbot to aid in their teaching.
ChatGPT has already proved to be a useful tool for teachers, enabling them to create quizzes, tests and lesson plans, and even to role-play challenging conversations.
At the American International School in Chennai, India, Geetha Venugopal compares teaching students about AI tools to teaching them how to use the internet responsibly.
“In her classroom, she advises students to remember that the answers that ChatGPT gives may not be credible and accurate all the time, and to think critically about whether they should trust the answer, and then confirm the information through other primary resources," OpenAI mentioned in the post.
The goal is to help them “understand the importance of constantly working on their original critical thinking, problem-solving and creativity skills".
Meanwhile, OpenAI has launched a business-focused edition of the company's AI-powered chatbot app, ChatGPT Enterprise, which will offer enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, customisation options, and much more.
According to the company, ChatGPT Enterprise is SOC 2 compliant and all conversations are encrypted in transit and at rest.
AI could fortify big business
Since ChatGPT took the world by storm last year, the internet has been littered with predictions of just how disruptive “generative" artificial intelligence (AI) will be. “Entire industries will reorient around it," enthused Bill Gates in a blog post earlier this year, in which he declared the technology to be as disruptive as the internet and the microprocessor. From media and education to law and health care, vast areas of human endeavour are expected to be turned upside down.
You may think that the losers from all this would be crusty old incumbents, rather as Kodak and Blockbuster were felled during past waves of technological upheaval. And, sure enough, a new wave of startups has sensed the chance to gain a foothold, crashing onto the scene with AI-powered legal chatbots, virtual doctors, writing assistants and so on. Some of these will make up a new industry of model-builders and innovators that soar to lofty valuations, rather as today's tech giants ascended during the internet age. In the rest of the economy, however, it is far from clear that the upheaval will consign today's corporate Goliaths to history. AI looks as likely to fortify reigning champions as to uproot them.
One reason for this is incumbents' advantages in distribution. That can help the giants maintain their dominance, even if they do not dream up the technology in the first place. Having paired with OpenAI, the creator of ChatGPT, for instance, Microsoft is souping up its ubiquitous Office software with AI features that let workers automate tasks such as writing emails and summarising documents. That will leave little space for rival upstarts. Salesforce and Zendesk, makers of software for sales reps and call-centre agents, respectively, are likewise embedding AI features in their tools. Whereas most companies may not be comfortable turning to a chatbot from an unknown startup for legal advice, they may try a large law firm like Allen & Overy, which is using one to help its lawyers speed up mundane tasks.
Incumbents will also be helped by their access to proprietary datasets, which can be used to tailor AI models to specific markets. Bloomberg, a financial-data firm, has used its trove of information to train a chatbot to help with financial analysis. McKinsey, a consulting giant, has trained a bot on its corpus of intellectual property. Health-care providers could exploit their anonymised medical records, insurers their claims data, and media companies their archival film or print, putting them ahead of upstarts unable to draw on such data.
Another reason to doubt that AI will upend the pecking order relates to how models are accessed. Whereas e-commerce required retailers to create an entirely new infrastructure for selling online, much AI development today is done by model-builders such as OpenAI and tech giants, including Alphabet and Amazon. Retailers, banks and others can link those models to their systems. By making it speedier for incumbents to develop AI-infused offerings, that will limit the opportunity for nimbler newcomers.
A last reason to expect incumbents to prevail is history. Even during the technological upheaval of the past few decades, surprisingly few corporate giants were felled. Only 52 of the Fortune 500, America's largest companies by revenue, were created since 1990. A mere seven were born after Apple unveiled the first iPhone in 2007. By contrast, 280 were founded before America entered the second world war. The average age of the Fortune 500 has steadily risen over the past three decades, from 75 to 90, defying the idea that the pace of disruption has accelerated in the internet era.
Survival is not guaranteed, obviously. Those that dawdle in their adoption of AI will cede the advantage to faster rivals. Those that ignore it entirely may still go the way of Kodak or Blockbuster. For the Davids of the AI wave, however, the odds are nonetheless fearsome.
© 2023, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com
Source: Live Mint