Over the past three decades, a handful of products like Netscape’s web browser, Google’s search engine and Apple’s iPhone have truly upended the tech industry and made what came before them look like lumbering dinosaurs.

Three weeks ago, an experimental chatbot called ChatGPT made its case to be the industry’s next big disrupter. It can serve up information in clear, simple sentences, rather than just a list of internet links. It can explain concepts in ways people can easily understand. It can even generate ideas from scratch, including business strategies, Christmas gift suggestions, blog topics and vacation plans.

Although ChatGPT still has plenty of room for improvement, its release led Google’s management to declare a “code red.” For Google, this was akin to pulling the fire alarm. Some fear the company may be approaching a moment that the biggest Silicon Valley outfits dread — the arrival of an enormous technological change that could upend the business.

For more than 20 years, the Google search engine has served as the world’s primary gateway to the internet. But with a new kind of chatbot technology poised to reinvent or even replace traditional search engines, Google could face the first serious threat to its main search business. One Google executive described the efforts as make or break for Google’s future.

ChatGPT was released by an aggressive research lab called OpenAI, and Google is among the many other companies, labs and researchers that have helped build this technology. But experts believe the tech giant could struggle to compete with the newer, smaller companies developing these chatbots, because of the many ways the technology could damage its business.

Google has spent several years working on chatbots and, like other Big Tech companies, has aggressively pursued artificial intelligence technology. Google has already built a chatbot that could rival ChatGPT. In fact, the technology at the heart of OpenAI’s chatbot was developed by researchers at Google.

Called LaMDA, or Language Model for Dialogue Applications, Google’s chatbot received enormous attention in the summer when a Google engineer, Blake Lemoine, claimed it was sentient. This was not true, but the technology showed how much chatbot technology had improved in recent months.

Google may be reluctant to deploy this new tech as a replacement for online search, however, because it is not suited to delivering digital ads, which accounted for more than 80% of the company’s revenue last year.

“No company is invincible; all are vulnerable,” said Margaret O’Mara, a professor at the University of Washington who specializes in the history of Silicon Valley. “For companies that have become extraordinarily successful doing one market-defining thing, it is hard to have a second act with something entirely different.”

Because these new chatbots learn their skills by analyzing huge amounts of data posted to the internet, they have a way of blending fiction with fact. They deliver information that can be biased against women and people of color. They can generate toxic language, including hate speech.

All of that could turn people against Google and damage the corporate brand it has spent decades building. As OpenAI has shown, newer companies may be more willing to take their chances with complaints in exchange for growth.

Even if Google perfects chatbots, it must tackle another issue: Does this technology cannibalize the company’s lucrative search ads? If a chatbot is responding to queries with tight sentences, there is less reason for people to click on advertising links.

“Google has a business model issue,” said Amr Awadallah, who worked for Yahoo and Google and now runs Vectara, a startup that is building similar technology. “If Google gives you the perfect answer to each query, you won’t click on any ads.”

Sundar Pichai, Google’s CEO, has been involved in a series of meetings to define Google’s AI strategy, and he has upended the work of numerous groups inside the company to respond to the threat that ChatGPT poses, according to a memo and audio recording obtained by The New York Times. Employees have also been tasked with building AI products that can create artwork and other images, such as OpenAI’s DALL-E technology, which has been used by more than 3 million people.

From now until a major conference expected to be hosted by Google in May, teams within Google’s research, Trust and Safety, and other departments have been reassigned to help develop and release new AI prototypes and products.

As the technology advances, industry experts believe, Google must decide whether it will overhaul its search engine and make a full-fledged chatbot the face of its flagship service.

Google has been reluctant to share its technology broadly because, like ChatGPT and similar systems, it can generate false, toxic and biased information. LaMDA is available to only a limited number of people through an experimental app, AI Test Kitchen.

Google sees this as a struggle to deploy its advanced AI without harming users or society, according to a memo viewed by the Times. In one recent meeting, a manager acknowledged that smaller companies had fewer concerns about releasing these tools but said Google must wade into the fray or the industry could move on without it, according to an audio recording of the meeting obtained by the Times.

Other companies have a similar problem. In 2016, Microsoft released a chatbot called Tay that spewed racist, xenophobic and otherwise filthy language; the company was forced to immediately remove it from the internet, never to return. In recent weeks, Meta took down a newer chatbot for many of the same reasons.

Executives said in the recorded meeting that Google intended to release the technology that drove its chatbot as a cloud computing service for outside businesses and that it might incorporate the technology into simple customer support tasks. It will maintain its trust and safety standards for official products, but it will also release prototypes that do not meet those standards.

It may limit those prototypes to 500,000 users and warn them that the technology could produce false or offensive statements. Since its release on the last day of November, ChatGPT — which can produce similarly toxic material — has been used by more than 1 million people.

“A cool demo of a conversational system that people can interact with over a few rounds, and it feels mind-blowing? That is a good step, but it is not the thing that will really transform society,” Zoubin Ghahramani, who oversees the AI lab Google Brain, said in an interview with the Times last month, before ChatGPT was released. “It is not something that people can use reliably on a daily basis.”

Google has already been working to enhance its search engine using the same technology that underpins chatbots like LaMDA and ChatGPT. The technology — a “large language model” — is not merely a way for machines to carry on a conversation.

Today, this technology helps the Google search engine highlight results that aim to directly answer a question you have asked. In the past, if you typed “Do aestheticians stand a lot at work?” into Google, it did not understand what you were asking. Now, Google correctly responds with a short blurb describing the physical demands of life in the skin care industry.

Many experts believe Google will continue to take this approach, incrementally improving its search engine rather than overhauling it.

“Google Search is fairly conservative,” said Margaret Mitchell, who was an AI researcher at Microsoft and Google, where she helped to start its Ethical AI team, and is now at the research lab Hugging Face. “It tries not to mess up a system that works.”

Other companies, including Vectara and a search engine called Neeva, are working to enhance search technology in similar ways. But as OpenAI and other companies improve their chatbots — working to solve problems with toxicity and bias — this technology could become a viable replacement for today’s search engines. Whoever gets there first could be the winner.

“Last year, I was despondent that it was so hard to dislodge the iron grip of Google,” said Sridhar Ramaswamy, who previously oversaw advertising for Google, including Search ads, and now runs Neeva. “But technological moments like this create an opportunity for more competition.”

This article originally appeared in The New York Times.
You never know where a bit of unusual scientific research is going to lead. Consider a 2012 study about turtle shells. Researchers subjected the skeletal remains of pond sliders, diamondback terrapins, painted turtles and box turtles to incremental increases in mechanical forces and measured where and how the shells began to buckle.
At some point in the next several weeks, a critical amount of Mars dust will cover the solar panels of NASA's InSight lander, which has been studying the red planet's crust, mantle, core and seismic activity since 2018. The batteries won't generate enough voltage to keep the spacecraft's instruments online. When that happens, the lander will power itself down and the mission will officially come to a close.
Elon Musk's own Twitter poll results say he should step down from the helm of the social network, in a referendum that Musk promised to follow after broad criticism of his stewardship of the company.
Hannah Chapman-Dutton, a Gonzaga Prep graduate and Spokane native, presented her research Thursday in Chicago on how snowfall in the summer affects the reflectivity of the sea ice surface in the Arctic.
As a major storm system approached the western United States Friday, a satellite 22,000 miles overhead captured images of its dynamic and moisture-laden pattern. That evening, the storm began making its way across the Pacific Northwest, unleashing an array of winds, snow, rain and freezing rain through the next day.
Spokane area residents will begin paying more for electricity and natural gas just days before Christmas after state regulators approved an earlier request by Avista Utilities to raise the rates for both.
The decision endorsing a more stringent and costly testing method for polychlorinated biphenyls, or PCBs, came three weeks after federal regulators announced the adoption of a stricter water quality standard that was rolled back during the Trump administration. Both rulings will factor into future permits allowing discharge of wastewater into the Spokane River and elsewhere in the state, a public battle that dates back decades and pits conservation and tribal interests against local municipalities and businesses over how much fish harvested from the river is safe to eat.
With Orion safe back on Earth, the last and most important tests of the Artemis I mission have been completed, but there are still miles to travel and months of data sifting to go before NASA will target an Artemis II launch date.
An experimental cancer vaccine, combined with another drug, performed well in mid-stage testing against a deadly form of skin cancer in the first effort to show that a cancer vaccine using messenger RNA may be effective, two pharmaceutical companies announced Tuesday.
The Department of Energy plans to announce Tuesday that scientists have been able for the first time to produce a fusion reaction that creates a net energy gain – a major milestone in the decadeslong, multibillion-dollar quest to develop a technology that provides unlimited, cheap, clean power.
ORLANDO, Fla. — NASA chased down the Orion spacecraft after its record-breaking reentry into Earth’s atmosphere Sunday to conclude the Artemis I mission that lifted off from Kennedy Space Center more than three weeks ago.
A Washington company under contract at Richland's Hanford site has agreed to pay over $150,000 in back wages and interest to Hispanic workers the company allegedly refused to hire.
In an experiment that ticks most of the mystery boxes in modern physics, a group of researchers announced Wednesday that they had simulated a pair of black holes in a quantum computer and sent a message between them through a shortcut in space-time called a wormhole.
It was a cloudy day on Titan. That was clear on the morning of Nov. 5 when Sébastien Rodriguez, an astronomer at the Université Paris Cité, downloaded the first images of Saturn’s biggest moon taken by NASA’s James Webb Space Telescope. He saw what looked like a large cloud near Kraken Mare, a 1,000-foot-deep sea in Titan’s north polar region.
Near the end of 2020, as the covid-19 pandemic continued to rage, a few climate scientists and energy experts made a prediction. They estimated that emissions from fossil fuels - which had just plummeted thanks to the global pandemic - might never again reach the heights of 2019. Perhaps, they speculated, after over a century of ever more carbon dioxide flowing into the atmosphere, the world had finally reached "peak" emissions.

They were wrong.

According to a report released last month by the Global Carbon Project, carbon emissions from fossil fuels in 2022 are expected to reach 37.5 billion tons of carbon dioxide, the highest ever recorded. That means that despite the continued fallout from the coronavirus pandemic - which caused emissions to drop by over 5 percent in 2020 - CO2 emissions are back and stronger than ever.

Scientists have reacted with dismay. For years before the pandemic, emissions appeared to be leveling off - sparking hope that the world was finally reaching the moment when emissions would start to come down. Then in 2020, "Covid came, there was a huge drop in emissions - and I guess we got a little overexcited," said Glen Peters, a climate scientist at the Center for International Climate Research in Oslo.

Here's why researchers were wrong about emissions peaking - and what it means for the future - in three charts:

- - -

1. History repeats itself

For the past century, carbon emissions have only ever fallen in one circumstance: crisis. When the 2008-2009 global financial crisis rocked the world's economic system, carbon emissions dropped by 1.4 percent. When the oil crises of 1973 and 1979 destabilized economies and caused people to wait in long lines for gasoline, emissions - previously on a steep upward climb - sputtered to a halt. And when the coronavirus pandemic locked billions of people indoors, the CO2 spilling into the atmosphere dropped by 5.2 percent - a record matched only by the aftermath of World War II.

Economic crisis, of course, is not the way that nations want to cut their carbon emissions. And in all of these historical examples, the temporary drop in emissions didn't last long. After the financial crisis, emissions rebounded, growing by approximately 1.65 billion tons in a single year.

In the immediate aftermath of the pandemic, some experts thought the world would take a different tack. Countries vowed to "build back better" and inject clean energy spending into their stimulus packages. But the result was not as green as might have been hoped. According to one analysis, only 6 percent of the stimulus money spent by G-20 nations went to areas that could cut emissions. And as people returned to flying, driving and making stuff, emissions bounced back.

- - -

2. Coal, coal, coal

For most of this century, the story of climate change has also been a story of coal. Coal is the world's dirtiest fossil fuel, releasing about 820 metric tons of greenhouse gas emissions for every gigawatt-hour of electricity produced. (Solar power, in contrast, releases about five metric tons for every gigawatt-hour.)

Before the pandemic, coal looked set for a long decline - which was part of why scientists and experts thought emissions might have reached their peak. But in the past couple of years, coal has made a resurgence. Russia's invasion of Ukraine has raised natural gas prices around the world, causing some European countries to lean more heavily on coal to keep energy prices low.

Thanks to China's continued pandemic lockdowns, the world's largest coal consumer hasn't accelerated its coal use quite as much as it could have - but India's use of the world's dirtiest fuel has skyrocketed. India's coal use is set to increase by 5 percent in 2022, on top of a 15 percent increase the year before. All of that means that in the past two years, emissions from burning coal have increased by almost a gigaton.

- - -

3. Developed versus developing countries

Part of the issue is that, while developed countries have seen their emissions decline over the past decades, that decline hasn't happened nearly fast enough to counterbalance the growth in emissions from developing countries. China's emissions have skyrocketed over the past 20 years, as the country has developed and lifted millions out of poverty. (Despite its high overall emissions, though, China still has lower per capita carbon emissions than the United States.) India's emissions are growing more slowly, but still growing.

"Fossil fuels are still the cheapest way to provide reliable electricity," said Ken Caldeira, a climate scientist at the Carnegie Institution for Science. (While wind and solar can be cheaper than fossil fuels in some cases, their intermittency - and the absence of cheap, big batteries - means that it's difficult to build an entire electricity system out of just renewable energy.) "It's like Maslow's hierarchy of needs," Caldeira said. "Developing countries have to put climate concerns second to their economic concerns."

For emissions to peak, therefore, richer countries would either need to cut their emissions much more rapidly - or help developing countries switch to lower-carbon fuel sources. And the latter option doesn't look particularly good. Conflicts over flows of money from richer to poorer countries have haunted the U.N. climate negotiation process for years, despite recent small victories at COP27.

"We've been unwilling to subsidize a massive green energy transition for ourselves," Caldeira said. "And the idea that then we're going to subsidize that for the Global South seems a little implausible."