SkyWatchMesh – UAP Intelligence Network

UAP Intelligence Network – Real-time monitoring of official UAP reports from government agencies and scientific institutions worldwide

Blog

  • Three options for wireless power in the enterprise

    Wireless connectivity is standard across corporate campuses, in warehouses and factories, and even in remote locations. But what about wireless power? Can we get rid of all cables?

    Many of us are already using wireless chargers for our cellphones and other devices. But induction chargers aren’t a complete solution since they require very close proximity between the device and the charging station. So, what can enterprises try? Some organizations are deploying midrange solutions that use radio signals to transmit power through the air, and others are even experimenting with laser power transmission.

    Here are three emerging options for wireless power in the enterprise, and top use cases to consider.

    Induction charging

    Induction charging is about more than saving users the two seconds it would take them to plug a cord into their device. It can also be used to power vehicles, such as factory vehicles, or even cars and buses. Also known as near-field charging, it’s the single largest sector of the global wireless power market, according to Coherent Market Insights, accounting for 89% of 2025’s $16.6 billion wireless power market. Of that, consumer electronics accounted for 74%.
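
    For a sense of scale, here is the arithmetic behind those figures (the percentages come from the Coherent Market Insights report cited above; the breakdown itself is our illustration):

        # Breakdown of the 2025 wireless power market figures cited above
        # (Coherent Market Insights); dollar values in billions.
        total_market = 16.6        # total 2025 wireless power market, $B
        induction_share = 0.89     # induction (near-field) share of that market
        consumer_share = 0.74      # consumer electronics share of induction

        induction = total_market * induction_share   # ~$14.8B
        consumer = induction * consumer_share        # ~$10.9B
        print(f"Induction charging: ${induction:.1f}B")
        print(f"  of which consumer electronics: ${consumer:.1f}B")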

    Detroit launched its first electric roadway in 2023, allowing vehicles to charge their batteries wirelessly when they park on, or drive over, that particular section of the road. It requires special equipment to be installed in the vehicle, and it can be pricey for individual cars, but it can be a useful option for buses or delivery vans. The city plans to add a second segment next year, reports the American Society of Civil Engineers.

    The first commercial user will be UPS, which will also add stationary wireless charging at its Detroit facility. “This innovative approach will revolutionize how we power our electric vehicles and drive fleet electrification forward,” said Dakota Semler, CEO and co-founder of electric vehicle manufacturer Xos, in a press release.

    Florida plans to open an electrified road in 2027, and, in California, UCLA is testing an in-road inductive charging system for campus buses that is planned to be in operation by 2028. The goal is to have the project ready in time for the 2028 Olympic Games in Los Angeles.

    Utah plans to add in-motion charging lanes to streets over the next ten years, with the first one scheduled to be installed later this year as part of its electrified transportation action plan. A major impetus is the 2034 Winter Olympics, which will be held in Salt Lake City.

    Early adopters in Utah include Utah PaperBox and Boise Cascade’s Salt Lake distribution hub. There’s also an electrified roadway, currently in the pilot and development phase, at the Utah Inland Port, which will provide in-motion charging for freight vehicles. Construction of the world’s first one-megawatt wireless charging station has already begun at this facility, which will provide 30-minute fast charging to parked electric semi trucks.

    Europe is even further ahead. Sweden began working on the first electric road in 2018. In 2021, the one-mile stretch of electrified road was able to charge two commercial vehicles simultaneously, even though they had different battery systems and power requirements. In 2022, an electric bus began operating regularly on the road, charging while driving over it.

    The idea is that wireless in-motion charging will allow commercial vehicles to spend more time on the road and less time parked at charging stations — and less time wasted driving to and from those stations. It also allows vehicles to carry smaller batteries while achieving longer effective ranges. If the technology goes mainstream on public roads, drivers would be able to pay for the electricity they draw, much as tolls are collected through the E-Z Pass system. But a more immediate application is the way UPS is deploying the technology: charging vehicles at a corporate facility.

    There are several vendors that offer this technology:

    • HEVO offers wireless charging pads for garages and parking lots for both residential and commercial markets.
    • Plugless Power is another company offering wireless charging for parked vehicles, and it claims to have provided 1 million charge hours to its customers, which include Google and Hertz. It provided the first wireless charging stations for Tesla Model S cars, and its wireless charging system for driverless shuttle buses was the first of its kind in Europe.
    • WAVE offers wireless charging systems for electric buses, and its Salt Lake City depot can charge multiple vehicles automatically using inductive power. In addition to buses, other use cases include ports, such as the Port of Los Angeles, and warehouse and distribution operations. In warehouses, it can provide power to electric yard trucks, forklifts, and other equipment.
    • InductEV offers high-power, high-speed wireless charging for commercial vehicles such as city buses, auto fleets and industrial vehicles, with on-route wireless charging solutions deployed in North America and Europe. It was named one of Time magazine’s best inventions of 2024. Seattle’s Sound Transit plans to have nearly half of its electric buses charged by on-route wireless chargers from InductEV, and municipal bus charging is already operational in Indianapolis, Martha’s Vineyard, and Oregon. The AP Moeller Maersk Terminal in Port Elizabeth, NJ is also using the company’s wireless chargers for its electric port tractors.

    Other companies offering wireless charging for industrial vehicles such as automated guided vehicles and material handling robots are Daihen, WiTricity, and ENRX.

    Meanwhile, cellphone charging pad-style wireless chargers also have plenty of business applications other than ease of use. Mojo Mobility, for example, offers charging systems designed to work in sterile medical environments.

    Ambient IoT and medium-range charging

    The most common type of ambient IoT is that powered by solar cells, where no power transmission is required at all. For example, ambient IoT is already reshaping agriculture, with solar-powered sensors placed in fields, greenhouses, and livestock areas, according to a September report from Omdia. Small devices can also be powered by motion or body heat.

    Transmitted wireless power, however, is more predictable and reliable and can work in a wider variety of environments — as long as the device is within range of the power transmitter or has a battery backup for when it’s not. Medium-range charging can work at a distance of a few inches to several yards or more. The less power the device requires, and the bigger its antenna, the longer the distance it can support.
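
    That inverse relationship between distance, power, and antenna size is essentially free-space path loss. Here is a minimal sketch using the standard Friis transmission equation; the transmitter power, frequency, and antenna gains below are illustrative assumptions, not any vendor’s specifications:

        import math

        def friis_received_power(p_tx_w, g_tx, g_rx, freq_hz, dist_m):
            """Friis free-space link budget: received power falls with the
            square of distance and rises with antenna gain on either end."""
            wavelength = 3e8 / freq_hz
            return p_tx_w * g_tx * g_rx * (wavelength / (4 * math.pi * dist_m)) ** 2

        # Illustrative only: a 1 W transmitter at 915 MHz and a small tag antenna.
        for d in (1, 3, 9):  # distance in meters
            p_rx = friis_received_power(1.0, g_tx=6.0, g_rx=1.5, freq_hz=915e6, dist_m=d)
            print(f"{d} m: {p_rx * 1e6:,.0f} microwatts at the receiver")

    Tripling the distance cuts the received power by a factor of nine, which is why medium-range transmission suits low-power sensors rather than laptops or phones.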

    “It’s really pushing IoT to the next level,” says Omdia analyst Shobhit Srivastava.

    One popular use case is for sensors that are placed in locations where it’s not convenient to change batteries, he says, such as logistics. For example, Wiliot’s IoT Pixel is a postage stamp-sized sticker powered by radio waves that works at a range of up to 30 feet. Sold in reels, the sensors cost as little as 10 cents each when bought in bulk. They can monitor temperature, location, and humidity and communicate this information to a company network via Bluetooth.

    Sensors such as these can be attached to pallets to track their location, says Srivastava. “People in Europe are very conscious about where their food is coming from and, to comply with regulations, companies need to have sensors on the pallets,” he says. “Or they might need to know that meat has been transported at proper temperatures.” The smart tags can just be slapped on pallets, he says. “This is a very cheap way to do this, even with millions of pallets moving around,” he says.
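
    At the bulk price quoted above, the economics Srivastava describes are easy to check; a quick illustration (the fleet size is a hypothetical assumption):

        # Illustrative only: tagging a hypothetical pallet fleet at the
        # ~$0.10-per-sensor bulk price cited above.
        pallets = 2_000_000    # hypothetical fleet size
        unit_price = 0.10      # USD per sensor, bought in reels
        print(f"Tagging {pallets:,} pallets costs about ${pallets * unit_price:,.0f}")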

    The challenge, Srivastava says, is that when the devices are moving from trucks to logistics hubs, to warehouses, and to retail stores, “they need to connect to different technologies.”

    Plus, all this data needs to be collected and analyzed. Some sensor manufacturers also offer cloud-based platforms to do this — and charge extra for the additional services.
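
    The plumbing behind that collect-and-analyze step is conceptually simple. Here is a minimal, hypothetical sketch of a gateway forwarding tag readings to a cloud platform; the field names and payload format are invented for illustration and do not reflect any particular vendor’s API:

        import json
        from dataclasses import dataclass

        @dataclass
        class TagReading:
            """One reading relayed from a batteryless tag via a Bluetooth gateway."""
            tag_id: str
            temperature_c: float
            humidity_pct: float
            location: str

        def to_cloud_payload(reading: TagReading) -> str:
            """Serialize a reading as JSON for upload to a cloud analytics platform."""
            return json.dumps({
                "tagId": reading.tag_id,
                "tempC": reading.temperature_c,
                "humidity": reading.humidity_pct,
                "loc": reading.location,
            })

        print(to_cloud_payload(TagReading("pallet-0042", 3.8, 71.0, "hub-rotterdam")))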

    One wireless power company, Energous, is doing just that, with an end-to-end ambient IoT platform consisting of wirelessly powered sensors, RF-based energy transmitters, and cloud-based monitoring software. Their newest product, the e-Sense Tag, was announced in June. The company has sold over 15,000 transmitters, says Giampaolo Marino, senior vice president of strategy and business development, and includes two Fortune 10 companies — one in retail IoT and one in logistics and transportation — among its customers.

    The new tags will cost around $5 each, though the price is subject to change as the product is commercialized, Marino says. It’s a bit pricier than the disposable tags that cost under $1 each. But they will last for years, he adds, and can be reprogrammed.
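
    The break-even against disposable tags follows directly from those prices (figures from the article, rounded):

        import math

        reusable_tag = 5.00     # reprogrammable tag price cited above, USD
        disposable_tag = 1.00   # single-use tag, upper bound cited above, USD
        # A reusable tag pays for itself once it replaces this many disposables:
        print(f"Break-even after {math.ceil(reusable_tag / disposable_tag)} deployments")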

    “Three years ago, it was science fiction,” Marino says. “Today, it’s something we’re deploying.” It’s similar to how we went from cable internet to Wi-Fi everywhere, he says.

    One use case that we’re not seeing yet for this kind of medium-range power transmission is factory robots. “We are far away from that,” says Omdia’s Srivastava. “The use cases are for low-power devices only.”

    Similarly, smartphones are also energy-hungry devices with their big displays and other components that draw power, he says. “So, smartphones won’t be ambient powered in the near future,” he says. “But small wearables, like wristbands in a hospital, can be ambient powered.”

    Like a warehouse, a hospital is a controlled physical location where power stations can be installed to provide power to the IoT devices, enabling a wide variety of applications, such as monitoring heart rates, respiration, and other key health metrics.

    Who’s in charge of wireless power networks?

    Is wireless power transmission a networking task that falls within the purview of the IT department, or is it handled on the operational or business unit level? According to Srivastava, that depends on the scale of the deployment. “If it’s a smaller deployment, with one or two locations to track, it might just stay with, say, the logistics team,” he says.

    But for larger deployments, with thousands of devices, ambient IoT is about more than just the power — there’s also the data transmission. “Then the network and security teams should be involved,” he says.

    Other issues that might come up beyond data security include electromagnetic interference and regulatory compliance for RF exposure.

    According to Omdia’s Srivastava and Energous, some of the notable vendors in the space are: Everactive (wireless ambient IoT devices); Wiliot (battery-free IoT pixel tags); HaiLa Technologies (low power wireless semiconductor); ONiO (self-powered, batteryless solutions); atma.io from Avery Dennison (connected product cloud); EnOcean SmartStudio (sensor and data management); SODAQ (low power hardware platforms); Lightricity (integration of energy-harvesting solutions into IoT systems); SML Group (retail RFID solution integrators); Sequans (integration of cellular IoT connectivity into ambient IoT systems); Powercast (both inductive and radio power transmission); Ossia (RF power using the FCC-approved Cota wireless power standard); and Minew (Bluetooth bridge and gateway to support Wiliot IoT Pixels).

    Laser charging

    For longer distances, lasers are the way to go.

    Lasers can be used to power drones and other aerial craft or to collect power from remote wind turbines. They can also be used to send power to cell towers in areas where power cables are impractical to deploy.

    In May, DARPA achieved a new wireless power transmission record, delivering more than 800 watts of power at a distance of over five miles. This technology could even collect power from space-based solar collectors and beam it down to Earth. In fact, it’s a bit easier to beam power up and down since there’s less atmosphere to get in the way. Caltech’s Space Solar Power Project demonstrated this in 2023.

    In space, there are no day-night cycles, no seasons, and no cloud cover, meaning that solar panels can yield eight times more power than they can down on Earth. The idea is to collect power, transform it into electricity, convert it to microwaves, and transmit it down to where it’s needed, including locations that have no access to reliable power.
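
    How much of that eightfold advantage survives the trip down is the open engineering question. A back-of-the-envelope sketch: the 8x collection figure is from the article, but every efficiency below is an illustrative assumption, not a published system specification:

        # Orbit-to-ground power chain, with assumed (not published) efficiencies.
        orbital_yield = 8.0    # collection advantage vs. the same panels on Earth
        dc_to_rf = 0.70        # assumed electricity-to-microwave conversion
        beam_capture = 0.80    # assumed fraction of the beam hitting the receiver
        rf_to_dc = 0.75        # assumed reconversion efficiency on the ground

        delivered = orbital_yield * dc_to_rf * beam_capture * rf_to_dc
        print(f"Delivered power vs. ground-based panels: {delivered:.1f}x")  # ~3.4x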

    In April, startup Aetherflux announced a $50 million funding round and plans to have its first low-Earth orbit test in 2026. China is currently working on a “Three Gorges dam in space” project, which will use super heavy rockets to create a giant solar power station in space, according to the South China Morning Post.

    The European Space Agency is expected to make a decision at the end of this year on proceeding with its own space-based solar power project, called SOLARIS.

    The same technology can also be used to transmit power from one satellite to another, and we’re already seeing a race to build a power grid in outer space.

    Star Catcher Industries plans to build a space-based network of solar power collectors that will concentrate solar energy and then transmit it to other satellites, meaning that companies will be able to send up more powerful satellites without expanding their physical footprint. On-the-ground testing was conducted earlier this year, and the first in-orbit test will take place in 2026.

    “Demand is growing exponentially for small satellites that can do more, from onboard processing to extended-duration operations,” said Chris Biddy, CEO at satellite manufacturer Astro Digital, which became Star Catcher’s first customer in September.


  • IBM unveils advanced quantum computer in Spain


    The Guipuzcoan city of San Sebastián is now operating the IBM Quantum System Two quantum computer, the most advanced of its kind that IBM has developed commercially to date. This is the first installation of its kind in Europe and the second in the world—the first was installed in the Japanese city of Kobe before the summer—not counting the equipment IBM has in its own quantum labs in New York.

    The IBM computer, which integrates a 156-qubit quantum chip (called Heron), is located in the new building of the Ikerbasque scientific foundation, which was also inaugurated on October 14. From there, more than 20 research centers and more than 30 companies, including the energy giant Iberdrola, will be given access to the new equipment. All of them are attached to the Basque Quantum (BasQ) program promoted by the Basque Government, which includes an investment of more than 153 million euros to promote quantum computing and science and to build an ecosystem designed to generate wealth and attract talent over the long term.

    “Today is not just the inauguration of an extraordinary machine,” said Juan Ignacio Pérez Iglesias, Minister of Science, Universities and Innovation of the Basque Government, at the presentation of the new computer, held in San Sebastián. Computerworld attended the presentation at the invitation of IBM. “Behind this announcement is a whole strategy, working with scientific, technological and business stakeholders, and with the help of IBM, to develop an entire ecosystem around quantum computing.” For Pérez Iglesias, “this is a founding moment.” A vision shared by the Lehendakari himself, Imanol Pradales, who emphasized at the event that the regional government’s focus on “cultivating and preserving science for decades has been key to IBM choosing us as a partner and traveling companion among the dozens of offers they had in quantum computing.”

    For the President of the Basque Government, the region’s quantum strategy (BasQ) “allows us to be a magnet for the generation of advanced knowledge and talent and also to align ourselves with the EU’s resilience and re-industrialization strategy.”

    Quantum computing, classical computing, and AI

    Jay Gambetta, current director of IBM Research and also present at the inauguration, emphasized that with this announcement, the company is “closer to the quantum advantage” it hopes to achieve by 2026, thanks, as Horacio Morell, president of IBM Spain, also pointed out, to the combination of new quantum computing with classical computing and artificial intelligence. “The combination of the three,” he asserted, “will allow us to tackle problems that have been intractable until now.”

    After indicating that, thanks to this project, “quantum computing is becoming a reality in Spain today, and the focus will now be on translating it into applications and greater competitiveness for industry,” Morell reviewed the Blue Giant’s roadmap for this emerging technology, which, in addition to pursuing the aforementioned quantum advantage next year, also aims to launch the first commercial quantum computer at scale capable of error correction, i.e., without the famous quantum “noise,” in 2029. “In addition to placing us [as a country] on the quantum computing map, this project will be a legacy for our society,” he noted.

    IBM executives and officials from the Basque Government and regional councils in front of Europe’s first IBM Quantum System Two, located at the IBM-Euskadi Quantum Computational Center in San Sebastián, Spain. (Photo: Irekia)

    Adolfo Morais, Deputy Minister of Science and Innovation of the Basque Government, explained to the press present at the event that the use of the new quantum machine in combination with other classical supercomputing systems, which will be modernized shortly, and artificial intelligence solutions will surely be a reality in 2027. “At the Euskadi Quantum Computational Center of Ikerbasque, we are already thinking about setting up a more modern supercomputer to replace the current Hyperion next year, so that in two years we will be able to use the three types of technology in combination.”

    “We don’t envision quantum computing working independently, just as we don’t envision classical computing working independently in the future,” emphasized Mikel Díez, director of Quantum Computing at IBM Spain, confirming that the new computer works in conjunction with classical computing architecture. “The purpose of our quantum computing proposal is for it to work in conjunction with classical computing,” he said.

    The machine, he explained, is a modular architecture that, for now, has a single quantum chip, but more can be added. It takes up almost an entire room and must be kept near -273 degrees Celsius, just above absolute zero, by a cryogenic cooling system. “It consumes kilowatts, not megawatts, because the qubits barely require any energy; in this sense, it’s very different from large classical supercomputers, which require much more energy,” Díez added.

    Practical applications of an emerging technology

    Quantum computing, combined with classical supercomputing and increasingly powerful AI tools, is expected to disrupt not only the academic world but also various productive sectors. As Mikel Díez himself recalls in an interview with Computerworld, the Basque Government’s BasQ program contemplates three types of initiatives or projects that will work with quantum technology. “The first are related to the evolution of quantum technology itself: how to continue improving error correction, how to identify components of quantum computers, and how to optimize both these and the performance of these devices.”

    In this sense, as Díez himself acknowledges to this newspaper, “it’s true that the computer we’re inaugurating today in San Sebastián is a ‘noisy’ computer, and this, in some ways, still limits certain features.” Specifically, according to the IBM executive, the Quantum System Two has a rate of one error per thousand operations performed with a qubit. “Although it’s a very, very small rate, we’re aware that it can lead to situations where the result isn’t entirely guaranteed. What are we doing at this current moment? Post-processing the results we obtain and correcting possible errors.” Díez emphasizes that this will be done for the duration of this transition period until the arrival of a fault-tolerant quantum machine, as classical computers have been for years.
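
    To see why that post-processing matters, consider how a one-in-a-thousand error rate compounds over a circuit; the circuit sizes below are illustrative:

        # How a ~1e-3 per-operation error rate (the figure Díez cites) compounds.
        error_rate = 1e-3
        for ops in (100, 1_000, 10_000):
            p_clean = (1 - error_rate) ** ops
            print(f"{ops:>6} operations: {p_clean:.1%} chance of an error-free run")

    A hundred operations finish cleanly about 90% of the time, but ten thousand almost never do, which is why mitigation is needed now and full error correction sits on the 2029 roadmap.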

    Another type of project to which quantum computing will be applied, from a more scientific perspective, is the behavior of materials or time crystals. Finally, he explains, there is a third line related to the application of this technology in industry. “For example, we are exploring how to improve investment portfolios for the banking sector, optimize the energy grid, or explore logistics problems.”

    The Basque Government and IBM unveil the first IBM Quantum System Two in Europe at the IBM-Euskadi Quantum Computational Center in San Sebastián, Spain. (Photo: IBM)

    Currently, according to Adolfo Morais of the Basque Government, “50% of the quantum computing capacity is already being used by the scientific sector. We hope that the remaining 50% will be used by other scientific institutions, as well as by private companies and public bodies.” Along these lines, he added, both the three provincial councils and the Basque Executive have programs to accelerate use cases. “We not only want to attract Basque companies and entities, but the project has a global scope. In the coming weeks, we will announce how to apply for access to these quantum services,” he stated, emphasizing that the selection of projects will be based on their quality.

    Morais also emphasized that the collaborative framework between Ikerbasque and IBM has never been about “merely acquiring a device.” The contract, which amounts to €80 million—the figure initially announced was over €50 million—includes the acquisition of the quantum computer, “the most expensive component,” but also the implementation of other research and training initiatives. In fact, thanks to this agreement, 150 people have already been trained in this technology.

    This feature originally appeared on Computerworld Spain.



  • Q&A: IBM’s Mikel Díez on hybridizing quantum and classical computing


    The Basque city of San Sebastián is beginning to play an interesting leading role in the radically different field of quantum technology. The official launch of IBM Quantum System Two, the most advanced quantum computer of the ‘Blue Giant,’ happened in Donostia-San Sebastián. This infrastructure, which the technology company has implemented on the main campus of the Ikerbasque scientific foundation in Gipuzkoa, aims, in combination with classical computing, to solve problems that have so far remained unsolvable.

    The installation in San Sebastián is the first of its kind in Europe and the second worldwide, after Kobe, Japan. It stems from a 2023 strategic agreement between IBM and the Basque Government that brought the technology company’s advanced quantum machine to Spain. As Mikel Díez, director of quantum computing at IBM Spain, explains to Computerworld Spain, the system hybridizes quantum and classical computing to leverage the strengths of both. “At IBM, we don’t see quantum computing working alone, but rather alongside classical computing so that each does what it does best,” he says.

    Is IBM Quantum System Two, launched today in San Sebastián, fully operational?

    Yes, with today’s inauguration, IBM Quantum System Two is now operational. This quantum computer architecture is the latest we have at IBM and the most powerful in terms of technological performance. From now on, we will be deploying all the projects and initiatives we are pursuing with the ecosystem on this computer, as IBM’s participation in this program is not exclusively infrastructure-based, but also involves promoting joint collaboration, research, training, and other programs.

    There are academic experts who argue that there are no 100% quantum computers yet, and there’s a lot of marketing from technology companies. Is this new quantum computer real?

    Back in 2019, we launched the first quantum computer available and accessible on our cloud. More than 30,000 people connected that day; since then, we’ve built more than 60 quantum computers, and as we’ve evolved them, we currently have approximately 10 operating remotely from our cloud in both the United States and Europe. We provide access, from both locations, to more than 500,000 developers. Furthermore, we’ve executed more than 3 trillion quantum circuits. This quantum computer, the most advanced to date, is a reality; it’s tangible, and it allows us to explore problems that until now couldn’t be solved. However, classical infrastructure is also needed to solve these problems. We don’t envision quantum computing going it alone, but rather working alongside classical computing so that each does what it does best. What do quantum computers do best? Exploring information maps with absolutely demanding, exponential amounts of data.

    So IBM’s proposal, in the end, is a hybrid of classical computing with quantum computing.

    Correct. But, I repeat, quantum computers exist, we have them physically. In fact, the one we’re inaugurating today is the first of its kind in Europe, the second in the world.

    This hybrid proposal isn’t really a whim; it’s done by design. For example, when we need to simulate how certain materials behave to demand the best characteristics from them, this process is designed with an eye to what we want to simulate on classical computers and what we want to simulate on quantum computers, so that the sum of the two is greater than two. Another example is artificial intelligence, for which we must identify patterns within a vast sea of data. This must be done from the classical side but also, where the classical side doesn’t reach, on the quantum side, so that the results of the latter converge throughout the entire artificial intelligence process. That’s the hybridization we’re seeking. In any case, I insist, in our IBM Quantum Network, we have more than 300 global organizations, private companies, public agencies, startups, technology centers, and universities running real quantum circuits.

    And, one clarification. Back in 2019, when we launched our first quantum computer, with between 5 and 7 qubits, what we could attempt to do with that capacity could be perfectly simulated on an ordinary laptop. After the advances of these years, simulating problems requiring more than 60 or 70 qubits with classical technology is not possible even on the largest classical computer in the world. That’s why what we do on our current computers, with 156 qubits, is run real quantum circuits. They’re not simulated: they run real circuits to help with artificial intelligence problems, optimization, simulation of materials, emergent models, that kind of thing.
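
    The 60-to-70-qubit cutoff he mentions follows from memory alone: exact state-vector simulation stores 2^n complex amplitudes for n qubits. A quick calculation (ours, for illustration):

        # Memory needed for exact state-vector simulation of n qubits,
        # at 16 bytes per complex amplitude (double precision).
        for n in (30, 45, 60):
            gib = (2 ** n) * 16 / 2 ** 30
            print(f"{n} qubits: {gib:,.0f} GiB")
        # 30 qubits fit in a workstation (16 GiB); 60 qubits would need
        # roughly 17 billion GiB, beyond any classical machine.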

    What kinds of things? What projects are being promoted with this new infrastructure?

    The Basque Government’s BasQ program includes three types of initiatives or projects. The first are related to the evolution of quantum technology itself: how to continue improving error correction, how to identify components of quantum computers, and how to optimize both these and the performance of these devices. From a more scientific perspective, we are working on how to represent the behavior of materials so that we can improve the resistance of polymers, for example. This is useful in aeronautics to improve aircraft suspension. We are also working on time crystals, which, from a scientific perspective, seek to improve precision, sensor control, and metrology. Finally, a third line relates to the application of this technology in industry; for example, we are exploring how to improve the investment portfolio for the banking sector, how to optimize the energy grid, and how to explore logistics problems.

    What were the major challenges in launching the machine you’re inaugurating today? Why did you choose the Basque Country to implement your second Quantum System Two?

    Before implementing a facility of this type in a geographic area, we assess whether it makes sense based on four main pillars: whether the area has the capacity and technological expertise, talent and workforce, a research and science ecosystem, and, finally, an industrial fabric. I recall that IBM currently has more than 40 quantum innovation centers around the world, and this is one of them, with the difference that this is the first in Europe to have a machine.

    When evaluating the Basque Country option, we saw that the Basque Government already had supercomputing facilities, giving them technological experience in managing these types of facilities from a scientific perspective. They also had a scientific policy in place for decades, which, incidentally, had defined quantum physics as one of its major lines of work. They had long-standing talent creation, attraction, and retention policies with universities. And, finally, they had an industrial network with significant expertise in digitalization technologies, artificial intelligence, and industrial processes that require technology. In other words, the Basque Country option met all the requirements.

    You said the San Sebastián facility is the same as the one implemented in Japan. So what does IBM have in Germany?

    What we have in Germany is a quantum data center, similar to our cloud data centers, but focused on serving organizations that don’t have a dedicated computer on-site for their ecosystem. But in San Sebastián, as in Kobe (Japan), there’s an IBM System Two machine with a modular architecture and a 156-qubit Heron processor.

    Just as we have a quantum data center in Europe to provide remote service, we have another one in the United States, where we also have our quantum laboratory, which is where we are building, in addition to the current system (System Two), the one that we will have ready in 2029, which will be fault-tolerant.

    And that one, we can say, will be a quantum computer at scale and fault-tolerant?

    Look, the computers are computers, and they’re real. The nuance may come from their capabilities, and it’s true that the one we’re inaugurating today in San Sebastián is a noisy computer, and this, in some ways, still limits certain features.

    IBM’s roadmap for quantum computing is as follows. First, by 2026, just around the corner, we hope to discover the quantum advantage, which will come from using existing real physical quantum computers alongside classical computers for specific processes. That is, we’re not just focused on whether it’s useful to have a quantum computer to run quantum circuits. Rather, as I mentioned before, by applying the capabilities of real quantum computers, alongside classical ones, we’ll gain an advantage in simulating new materials, simulating research into potential new drugs, optimizing processes for the energy grid, or for financial investment portfolios.

    The second major milestone will come in 2029, when we expect to have a fault-tolerant machine with 200 logical qubits commercially available. The third milestone is planned for 2033, when we will have a fault-tolerant machine with 2,000 logical qubits—that’s 10 times more logical qubits, meaning we’ll be able to perform processing at scale without the capacity limitations that exist today with qubits that lack fault tolerance.

    You mentioned earlier that current quantum computers, including the one in San Sebastián, are noisy. How does this impact the projects you intend to support?

    What we, and indeed the entire industry, are looking for when we talk about the capacity of a quantum computer is processing speed, processing volume, and accuracy rate. The latter is related to errors. The computer we inaugurated here has a rate that is on the threshold of one error for every thousand operations we perform with a qubit. Although it’s a very, very small rate, we are aware that it can lead to situations where the result is not entirely guaranteed. What are we doing at this current time? Post-processing the results we obtain and correcting possible errors. Obviously, this is a transitional stage; what we want is for these errors to no longer exist by 2029, and for the results to no longer need to be post-processed to eliminate them. We want error correction to be automatic, as is the case with the computers we use today in our daily lives.

    But even today, with machines with these flaws, we are seeing successes: HSBC, using our quantum computing, has achieved a 34% gain in estimating the probability of automated government bond trades being closed.

    So the idea they have is to improve quantum computing along the way, right?

    Exactly. It’s the same as the project to go to the Moon. Although the goal was the Moon, other milestones were reached along the way. The same thing happens with quantum computing; the point is that you have to have a very clear roadmap, and IBM has one, at least until 2033.

    How do you view the progress of competitors like Google, Microsoft, and Fujitsu?

    In quantum computing, there are several types of qubits—the elements that store the information we want to use for processing—and we’re pursuing the option we believe to be the most robust: superconducting technology. Ours are superconducting qubits, and we believe this is a good choice because it’s the most widely recognized option in the industry.

    In quantum computing, it’s not all about the hardware; the qubits matter, but so does what we call the stack: all the levels that must be traversed to reach a quantum computer. The important thing is to look at how you perform at all those levels, and there, well, there are competitors who work more on one part than another. But, once again, what industry and science value is a supplier having the complete stack, because that’s what allows them to conduct experiments and advance applications.

    A question I’ve been asked recently is whether we’ll have quantum computers at home.

    So, will we have them?

    This is similar to what happens with electricity. Our homes receive 220 volts through the low-voltage meter; we don’t need high voltage or large transformers, which are found in large centers on the outskirts of cities. It’s the same in our area: we’ll have data centers with classical and quantum supercomputers, but we’ll be able to see the value in our homes when new materials, improved capabilities of artificial intelligence models, or even better drugs are discovered.

    Speaking of electricity, Iberdrola is one of the companies using the new quantum computer.

    There are various industrial players in the Basque Country’s quantum ecosystem, in addition, of course, to the scientific hub. Iberdrola recently joined the Basque Government’s BasQ program to utilize the capabilities of our computer and the entire ecosystem to improve its business processes and optimize the energy grid. They are interested in optimizing the predictive maintenance of all their assets, including home meters, wind turbines, and the various components in the energy supply chain.

    What other large companies will use the new computer?

    In the Basque ecosystem, we currently have more than 30 companies participating, along with technology and research centers. These are projects that have not yet been publicly announced because they are still under development, although I can mention some entities, such as Tecnalia, Ikerlan, the universities of Deusto and Mondragón, and startups such as Multiverse and Quantum Match.

    How many people are behind the San Sebastián quantum computer project?

    There will be about 400 researchers in the building where the computer is located, although these projects involve many more people.

    Spain has a national quantum strategy in collaboration with the autonomous communities. Is it possible that IBM will bring another quantum machine to another part of the country?

    In principle, the one we have on our roadmap is the computer implemented for the Basque Government. In Andalusia, we have inaugurated a quantum innovation center, but it’s a project that doesn’t have our quantum computer behind it. That is, people in Andalusia will be able to access our European quantum data center [the one in Germany]. In any case, the Basque and Andalusian governments are in contact so that, should they need it, Andalusia can access the quantum computer in San Sebastián.

    What advantages does being able to access a quantum machine in one’s own country bring?

    When we talked earlier about how we want to hybridize quantum computing with classical computing, well, this requires having the two types of computers adjacent to each other, something that happens in the San Sebastián center, because the quantum computer is on one floor and the classical computer will be on the other floor. Furthermore, there are some processes that, from a performance and speed perspective, require the two machines to be very close together.

    And, of course, if you own the machine, you control access: who enters, when, and at what speed. Obviously, in the case of quantum data centers like the one in Germany, you have to go through a queue, and there may be more congestion if people from many European countries log in at the same time.

    On the other hand, and this is relevant, at IBM we believe that having our quantum machines in a third-party facility raises the quality standards compared to having them only in our data centers or laboratories controlled by us.

    Beyond this, having a quantum machine in the country will generate an entire ecosystem around this technology and will be a focal point for attracting talent not only in San Sebastián but throughout Spain.

    Will there be another computer of this type in Europe soon?

    We don’t have a forecast for it at the moment, but who could have imagined four years ago that there would be a computer like this in Spain, and in the Basque Country in particular?

    We’ve talked a lot, but is there anything you’d like to highlight?

    As part of the Basque Government’s quantum program, we’ve trained more than 150 people from different companies, technology centers, startups, and more over the past two years. We’ve also created an ambassador program to help identify specific applications. We seek to reach across the entire ecosystem so that there are people with sufficient knowledge to understand where value is generated by using quantum technology.

    This interview originally appeared on Computerworld Spain.



  • Q&A: IBM’s Mikel Díez on hybridizing quantum and classical computing

    Q&A: IBM’s Mikel Díez on hybridizing quantum and classical computing

    The Basque city of San Sebastián is beginning to play an interesting leading role in the radically different field of quantum technology. The official launch of IBM Quantum System Two, the most advanced quantum computer of the ‘Blue Giant,’ happened in Donostia-San Sebastián. This infrastructure, which the technology company has implemented on the main campus of the Ikerbasque scientific foundation in Gipuzkoa, aims to solve problems that have remained unsolvable in combination with classical computing.

    The installation in San Sebastián is the first of its kind in Europe and the second worldwide, after Kobe, Japan. It stems from a 2023 strategic agreement between IBM and the Basque Government that brought the technology company’s advanced quantum machine to Spain. As Mikel Díez, director of Quantum Computing at IBM Spain, explains to Computerworld, the system hybridizes quantum and classical computing to leverage the strengths of both. “At IBM, we don’t see quantum computing working alone, but rather alongside classical computing so that each does what it does best,” he says.

    Is IBM Quantum System Two, launched today in San Sebastián, fully operational?

    Yes, with today’s inauguration, IBM Quantum System Two is now operational. This quantum computer architecture is the latest we have at IBM and the most powerful in terms of technological performance. From now on, we will be deploying all the projects and initiatives we are pursuing with the ecosystem on this computer, as IBM’s participation in this program is not exclusively infrastructure-based, but also involves promoting joint collaboration, research, training, and other programs.

    There are academic experts who argue that there are no 100% quantum computers yet, and there’s a lot of marketing from technology companies. Is this new quantum computer real?

    Back in 2019, we launched the first quantum computer available and accessible on our cloud. More than 30,000 people connected that day; since then, we’ve built more than 60 quantum computers, and as we’ve evolved them, we currently have approximately 10 operating remotely from our cloud in both the United States and Europe. We provide access, from both locations, to more than 500,000 developers. Furthermore, we’ve executed more than 3 trillion quantum circuits. This quantum computer, the most advanced to date, is a reality, it’s tangible, and it allows us to explore problems that until now couldn’t be solved. However, classical infrastructure is also needed to solve these problems. We don’t envision quantum computing going it alone, but rather working alongside classical computing so that each does what it does best. What do quantum computers do best? Well, exploring information maps, and with absolutely demanding and exponential amounts of data.

    So IBM’s proposal, in the end, is a hybrid of classical computing with quantum computing.

    Correct. But, I repeat, quantum computers exist, we have them physically. In fact, the one we’re inaugurating today is the first of its kind in Europe, the second in the world.

    This hybrid proposal isn’t really a whim; it’s done by design. For example, when we need to simulate how certain materials behave to demand the best characteristics from them, this process is designed with an eye to what we want to simulate on classical computers and what we want to simulate on quantum computers, so that the sum of the two is greater than two. Another example is artificial intelligence, for which we must identify patterns within a vast sea of ​​data. This must be done from the classical side but also where it doesn’t reach, in the quantum side, so that the results of the latter converge throughout the entire artificial intelligence process. That’s the hybridization we’re seeking. In any case, I insist, in our IBM Quantum Network, we have more than 300 global organizations, private companies, public agencies, startups, technology centers, universities running real quantum circuits.

    And, one clarification. Back in 2019, when we launched our first quantum computer, with between 5 and 7 qubits, what we could attempt to do with that capacity could be perfectly simulated on an ordinary laptop. After the advances of these years, being able to simulate problems requiring more than 60 or 70 qubits with classical technology is not possible even on the largest classical computer in the world. That’s why what we do on our current computers, with 156 qubits, is run real quantum circuits. They’re not simulated: they run real circuits to help with artificial intelligence problems, optimization of simulation of materials, emergence of models all that kind of thing.

    What kinds of things? What projects are being promoted with this new infrastructure?

    The Basque Government’s BasQ program includes three types of initiatives or projects. The first are related to the evolution of quantum technology itself: how to continue improving error correction, how to identify components of quantum computers, and how to optimize both these and the performance of these devices. From a more scientific perspective, we are working on how to represent the behavior of materials so that we can improve the resistance of polymers, for example. This is useful in aeronautics to improve aircraft suspension. We are also working on time crystals, which, from a scientific perspective, seek to improve precision, sensor control, and metrology. Finally, a third line relates to the application of this technology in industry; for example, we are exploring how to improve the investment portfolio for the banking sector, how to optimize the energy grid , and how to explore logistics problems.

    What were the major challenges in launching the machine you’re inaugurating today? Why did you choose the Basque Country to implement your second Quantum System Two?

    Before implementing a facility of this type in a geographic area, we assess whether it makes sense based on four main pillars. First, whether the area has the capacity, technological expertise , talent and workforce, a research and science ecosystem, and, finally, an industrial fabric. I recall that IBM currently has more than 40 quantum innovation centers around the world, and this is one of them, with the difference that this is the first to have a machine in Europe.

    When evaluating the Basque Country option, we saw that the Basque Government already had supercomputing facilities, giving them technological experience in managing these types of facilities from a scientific perspective. They also had a scientific policy in place for decades, which, incidentally, had defined quantum physics as one of its major lines of work. They had long-standing talent creation, attraction, and retention policies with universities. And, finally, they had an industrial network with significant expertise in digitalization technologies, artificial intelligence, and industrial processes that require technology. In other words, the Basque Country option met all the requirements.

    He said the San Sebastián facility is the same as the one they’ve implemented in Japan. So what does IBM have in Germany?

    What we have in Germany is a quantum data center, similar to our cloud data centers , but focused on serving organizations that don’t have a dedicated computer on-site for their ecosystem. But in San Sebastián, as in Kobe (Japan), there’s an IBM System Two machine with a modular architecture and a 156-qubit Heron processor.

    Just as we have a quantum data center in Europe to provide remote service, we have another one in the United States, where we also have our quantum laboratory, which is where we are building, in addition to the current system (System Two), the one that we will have ready in 2029, which will be fault-tolerant.

    And this one we can say is a quantum computer at scale and fault-tolerant.

    Look, computers are computers, they’re real, and they’re real. The nuance may come from their capabilities, and it’s true that the one we’re inaugurating today in San Sebastián is a noisy computer, and this, in some ways, still limits certain features.

    IBM’s roadmap for quantum computing is as follows. First, by 2026, just around the corner, we hope to discover the quantum advantage, which will come from using existing real physical quantum computers alongside classical computers for specific processes. That is, we’re not just focused on whether it’s useful to have a quantum computer to run quantum circuits. Rather, as I mentioned before, by applying the capabilities of real quantum computers, alongside classical ones, we’ll gain an advantage in simulating new materials, simulating research into potential new drugs, optimizing processes for the energy grid , or for financial investment portfolios.

    The second major milestone will come in 2029, when we expect to have a fault-tolerant machine with 200 logical qubits commercially available. The third milestone is planned for 2033, when we will have a fault-tolerant machine with 2,000 logical qubits—that’s 10 times more logical qubits, meaning we’ll be able to perform processing at scale without the capacity limitations that exist now, and qubits without fault tolerance.

    You mentioned earlier that current quantum computers, including the one in San Sebastián, are noisy. How does this impact the projects you intend to support?

    What we, and indeed the entire industry, are looking for when we talk about the capacity of a quantum computer is processing speed, processing volume, and accuracy rate. The latter is related to errors. The computer we inaugurated here has a rate that is on the threshold of one error for every thousand operations we perform with a qubit. Although it’s a very, very small rate, we are aware that it can lead to situations where the result is not entirely guaranteed. What are we doing at this current time? Post-processing the results we obtain and correcting possible errors. Obviously, this is a transitional stage; what we want is for these errors to no longer exist by 2029, and for the results to no longer need to be post-processed to eliminate them. We want error correction to be automatic, as is the case with the computers we use today in our daily lives.

    But even today, with machines with these flaws, we are seeing successes: HSBC, using our quantum computing, has achieved a 34% gain in estimating the probability of automated government bond trades being closed.

    So the idea they have is to improve quantum computing along the way, right?

    Exactly. It’s the same as the project to go to the Moon. Although the goal was that, to go to the Moon, other milestones were discovered along the way. The same thing happens with quantum computing; the point is that you have to have a very clear roadmap, but IBM has one, at least until 2033.

    How do you view the progress of competitors like Google, Microsoft, and Fujitsu?

    In quantum computing, there are several types of qubits—the elements that store the information we want to use for processing—and we’re pursuing the option we believe to be the most robust: superconducting technology. Ours are superconducting qubits, and we believe this is a good choice because it’s the most widely recognized option in the industry.

    In quantum computing, it’s not all about the hardware, and in this sense, qubits and what we call the stack of all the levels that must be traversed to reach a quantum computer are important. The important thing is to look at how you are at all those levels, and there, well, there are competitors who work more on one part than another, but, once again, what industries and science value is a supplier having the complete stack because that’s what allows them to conduct experiments and advance applications.

    A question I’ve been asked recently is whether we’ll have quantum computers at home.

    So, will we have them?

    This is similar to what happens with electricity. Our homes receive 220 volts through the low-voltage meter; we don’t need high voltage or large transformers, which are found in large centers on the outskirts of cities. It’s the same in our area: we’ll have data centers with classical and quantum supercomputers, but we’ll be able to see the value in our homes when new materials, improved capabilities of artificial intelligence models, or even better drugs are discovered.

    Speaking of electricity, Iberdrola is one of the companies using the new quantum computer.

    There are various industrial players in the Basque Country’s quantum ecosystem, in addition, of course, to the scientific hub. Iberdrola recently joined the Basque Government’s BasQ program to utilize the capabilities of our computer and the entire ecosystem to improve its business processes and optimize the energy grid. They are interested in optimizing the predictive maintenance of all their assets, including home meters, wind turbines, and the various components in the energy supply chain.

    What other large companies will use the new computer?

    In the Basque ecosystem, we currently have more than 30 companies participating, along with technology and research centers. These are projects that have not yet been publicly announced because they are still under development, although I can mention some entities such as Tecnalia, Ikerlan, the universities of Deusto and Modragón, startups such as Multiverse and Quantum Match.

    How many people are behind the San Sebastián quantum computer project?

    There will be about 400 researchers in the building where the computer is located, although these projects involve many more people.

    Spain has a national quantum strategy in collaboration with the autonomous communities. Is it possible that IBM will bring another quantum machine to another part of the country?

    In principle, the one we have on our roadmap is the computer implemented for the Basque Government. In Andalusia, we have inaugurated a quantum innovation center, but it’s a project that doesn’t have our quantum computer behind it. That is, people in Andalusia will be able to access our European quantum data center [the one in Germany]. In any case, the Basque and Andalusian governments are in contact so that, should they need it, Andalusia can access the quantum computer in San Sebastián.

    What advantages does being able to access a quantum machine in one’s own country bring?

    When we talked earlier about how we want to hybridize quantum computing with classical computing, well, this requires having the two types of computers adjacent to each other, something that happens in the San Sebastián center, because the quantum computer is on one floor and the classical computer will be on the other floor. Furthermore, there are some processes that, from a performance and speed perspective, require the two machines to be very close together.

    And, of course, if you own the machine, you control access: who enters, when, at what speed obviously, in the case of quantum data centers like the one in Germany, you have to go through a queue, and there may be more congestion if people from many European countries enter at the same time.

    On the other hand, and this is relevant, at IBM we believe that having our quantum machines in a third-party facility raises the quality standards compared to having them only in our data centers or laboratories controlled by us.

    Beyond this, having a quantum machine in the country will generate an entire ecosystem around this technology and will be a focal point for attracting talent not only in San Sebastián but throughout Spain.

    Will there be another computer of this type in Europe soon?

    We don’t have a forecast for it at the moment, but who could have imagined four years ago that there would be a computer like this in Spain, and in the Basque Country in particular?

    We’ve talked a lot, but is there anything you’d like to highlight?

    As part of the Basque Government’s quantum program, we’ve trained more than 150 people from different companies, technology centers, startups, and more over the past two years. We’ve also created an ambassador program to help identify specific applications. We seek to reach across the entire ecosystem so that there are people with sufficient knowledge to understand where value is generated by using quantum technology.

    This interview originally appeared on Computerworld Spain.


  • Comet 3I/ATLAS shows signs of water

    Since its arrival this summer, scientists have been racing to understand as much as possible about 3I/ATLAS, the third recorded visitor from outside our solar system. A breakthrough study published on Sept. 30 has scientists exclaiming, “OH!”: the first detection of hydroxyl gas (OH) from an interstellar object. The finding provides a crucial touchstone for… [Continue reading “Comet 3I/ATLAS shows signs of water”]

    The post Comet 3I/ATLAS shows signs of water appeared first on Astronomy Magazine.


  • Nvidia: Latest news and insights

    More processor coverage on Network World:
    Intel news and insights |
    AMD news and insights

    With its legacy of innovation in GPU technology, Nvidia has become a dominant force in the AI market. Nvidia’s partner list reads like a technology who’s who (AWS, Google Cloud, Microsoft Azure, Dell, HPE), and its reach crosses into vertical industries such as healthcare, finance, automotive, and manufacturing.

    From its gaming roots, Nvidia’s GPUs have evolved to power breakthroughs in scientific simulations, data analysis, and machine learning.

    Follow this page for the latest news, analysis, and features on Nvidia’s advancements and their impact on enterprise transformation.

    Nvidia news and analysis

    Nvidia’s DGX Spark desktop supercomputer is on sale now, but hard to find

    October 15, 2025: Nvidia’s “personal AI supercomputer,” the DGX Spark, may run fast, but it’s been slow getting here. It finally went on sale today, five months later than the company initially promised, and early units are hard to find.

    Inside Nvidia’s ‘grid-to-chip’ vision: How Vera Rubin and Spectrum-XGS advance AI giga-factories

    October 13, 2025: Nvidia will be front-and-center at this week’s Global Summit for members of the Open Compute Project. The company is making announcements on several fronts, including the debut of Vera Rubin MGX, its next-gen architecture fusing CPUs and GPUs, and Spectrum-XGS Ethernet, a networking fabric designed for “giga-scale” AI factories.

    Nvidia and Fujitsu team for vertical industry AI projects

    October 6, 2025: Nvidia has partnered with Fujitsu to collaborate on vertical industry-specific artificial intelligence projects. The partnership will focus on co-developing and delivering an AI agent platform tailored for industry-specific agents in sectors such as healthcare, manufacturing, and robotics.

    Nvidia and OpenAI open $100B, 10 GW data center alliance

    September 23, 2025: OpenAI and Nvidia will create a strategic partnership to deploy at least 10 gigawatts of Nvidia systems for OpenAI’s next-generation AI infrastructure. The first phase is expected to come online in the second half of 2026 using Nvidia’s Vera Rubin CPU/GPU combination platform to train and run new models.

    Who wins/loses with the Intel-Nvidia union?

    September 22, 2025: Nvidia is dipping into its $56 billion bank account to acquire a 5% stake in Intel for $5 billion, making it Intel’s second largest shareholder after the federal government’s recent investment. The deal gives Nvidia greater access to the x86 ecosystem, which is important for the enterprise data center market, and gives Intel access to in-demand GPUs that can help move its CPU products as well.

    Nvidia reportedly acquires Enfabrica CEO and chip technology license

    September 19, 2025: Nvidia has hired away the CEO and other staff of chip interconnect maker Enfabrica, and licensed its core technologies in a deal worth over $900 million. Behind the move is demand for computing capacity to power generative AI for the likes of OpenAI, Anthropic, Mistral, AWS, Microsoft, and Google.

    Intel, Nvidia team to develop custom data center and PC products

    September 18, 2025: Intel will collaborate with Nvidia to design CPUs with Nvidia’s NVLink high-speed chip interconnect. Nvidia and Intel also agreed to “jointly develop multiple generations of custom data center and PC products,” they said in a joint statement.

    China’s strike on Nvidia threatens global AI supply chains, sparking enterprise concerns

    September 16, 2025: China has accused Nvidia of breaching its anti-monopoly law, a move that could disrupt the chipmaker’s global operations and heighten risks for enterprises dependent on its GPUs as US-China trade tensions escalate.

    Nvidia rolls out new GPUs for AI inferencing, large workloads

    September 9, 2025: Nvidia has taken the wraps off a new purpose-built GPU along with a next-generation platform specifically targeted at massive-context processing for tasks such as software coding and generative video.

    Cadence adds Nvidia to digital twin tool for data center design

    September 9, 2025: Cadence has updated its Cadence Reality Digital Twin Platform library with the addition of digital twins for Nvidia’s DGX SuperPOD with DGX GB200 systems.

    Nvidia networking roadmap: Ethernet, InfiniBand, co-packaged optics will shape data center of the future

    September 4, 2025: Nvidia’s networking roadmap is based on the data center’s evolution into a new unit of computing: a shift from CPUs to GPUs as the primary computing units, and from functions distributed across different components toward infrastructure built to support AI workloads.

    Nvidia’s new computer gives AI brains to robots

    August 25, 2025: Nvidia CEO Jensen Huang sees a future where billions of robots serve humans, bringing in trillions of dollars in revenue for the company. To meet that goal, Nvidia on Monday unveiled a new computing device that will go into high-performing robots that could then try to replicate human behavior.

    Nvidia turns to software to speed up its data center networking hardware for AI

    August 22, 2025: Nvidia wants to make long-haul GPU-to-GPU communication over Ethernet faster and more reliable, and hopes to achieve that with its new Spectrum-XGS algorithms, software protocols baked into Nvidia’s latest Ethernet gear.

    Nvidia: ‘Graphics 3.0’ will drive physical AI productivity

    August 15, 2025: Nvidia has floated the idea of “Graphics 3.0” with the hope of making AI-generated graphics central to physical productivity. The concept revolves around graphics created by genAI tools. Nvidia says AI-generated graphics could help train robots to do their jobs in the physical world or help AI assistants automate the creation of equipment and structures.

    Nvidia launches Blackwell-powered RTX Pro GPUs for compact AI workstations

    August 12, 2025: Nvidia announced two new professional GPUs, the RTX Pro 4000 Small Form Factor (SFF) and the RTX Pro 2000. Built on its Blackwell architecture, Nvidia’s new GPUs aim to deliver powerful AI capabilities in compact desktop and workstation deployments.

    Nvidia’s new genAI model helps robots think like humans

    August 11, 2025: Nvidia has developed a genAI model to help robots make human-like decisions by analyzing surrounding scenes. The Cosmos Reason model in robots can take in information from video and graphics input, analyze the data, and use its understanding to make decisions.

    Nvidia patches critical Triton server bugs that threaten AI model security

    August 5, 2025: A surprising attack chain in Nvidia’s Triton Inference Server, starting with a seemingly minor memory-name leak, could allow full remote server takeover without user authentication.

    China demands ‘security evidence’ from Nvidia over H20 chip backdoor fears

    August 4, 2025: China escalated pressure on Nvidia with the state-controlled People’s Daily publishing an opinion piece titled “Nvidia, how can I trust you?” — a day after regulators summoned company officials over alleged security vulnerabilities in H20 artificial intelligence chips.

    Nvidia to restart H20 exports to China, unveils new export-compliant GPU

    July 15, 2025: Nvidia will restart H20 AI chip sales to China and release a new GPU model compliant with export rules, a move that could impact global AI hardware strategies for enterprise IT teams. Nvidia has applied for US approval to resume sales and says that the government has indicated licenses will be granted and deliveries could begin soon.

    Nvidia GPUs are vulnerable to Rowhammer attacks

    July 15, 2025: Nvidia has issued a security reminder to application developers, computer manufacturers, and IT leaders that modern memory chips in graphic processors are potentially susceptible to so-called Rowhammer exploits after Canadian university researchers proved that an Nvidia A6000 GPU could be successfully compromised with a similar attack.

    Nvidia hits $4T market cap as AI, high-performance semiconductors hit stride

    July 11, 2025: Nvidia became the first publicly traded company to surpass a $4 trillion market capitalization value, 13 months after surpassing the $3 trillion mark. This makes Nvidia the world’s most valuable company ahead of Apple and Microsoft.

    New Nvidia technology provides instant answers to encyclopedic-length questions

    July 8, 2025: Have a question that requires processing an encyclopedia-length dataset? Nvidia says its new technique can answer it instantly. Built on the company’s Blackwell processor’s capabilities, the new “Helix Parallelism” method allows AI agents to process millions of words — think encyclopedia-length — and support up to 32x more users at a time.

    Nvidia doubles down on GPUs as a service

    July 8, 2025: Nvidia’s recent initiative to dive deeper into the GPU-as-a-service (GPUaaS) model marks a significant and strategic shift that reflects an evolving landscape within the cloud computing market. 

    Nvidia, Perplexity to partner with EU and Middle East AI firms to build sovereign LLMs

    June 12, 2025: Nvidia and AI search firm Perplexity said they are joining hands with model builders and cloud providers across Europe and the Middle East to refine sovereign large-language models (LLMs) and accelerate enterprise AI uptake in local industries.

    Nvidia: ‘Sovereign AI’ will change digital work

    June 11, 2025: Nvidia executives think sovereign AI has the potential to change digital work as generative AI (genAI) aligns with national priorities and local regulations.

    AWS cuts prices of some EC2 Nvidia GPU-accelerated instances

    June 9, 2025: AWS has reduced the prices of some of its Nvidia GPU-accelerated instances to attract more AI workloads while competing with rivals, such as Microsoft and Google, as demand for GPUs and the cost of securing them continues to grow.

    Nvidia aims to bring AI to wireless

    June 6, 2025: Nvidia hopes to maximize RAN infrastructure use (traditional networks average a low 30% to 35%), use AI to rewrite the air interface, and enhance performance and efficiency through radio signal processing. The longer-term goal is to seamlessly process AI traffic at the network edge to create new monetization opportunities for service providers.

    Oracle to spend $40B on Nvidia chips for OpenAI data center in Texas

    May 26, 2025: Oracle is reportedly spending about $40 billion on Nvidia’s high-performance computer chips to power OpenAI’s new data center in Texas, marking a pivotal shift in the AI infrastructure landscape that has significant implications for enterprise IT strategies.

    Nvidia eyes China rebound with stripped-down AI chip tailored to export limits

    May 26, 2025: Nvidia plans to launch a lower-cost AI chip for China in June, aiming to protect market share under the US export controls and signal a broader shift toward affordable, segmented products that could impact global enterprise AI spending.

    Nvidia introduces ‘ridesharing for AI’ with DGX Cloud Lepton

    May 19, 2025: Nvidia introduced DGX Cloud Lepton, an AI-centric cloud software program that makes it easier for AI factories to rent out their hardware to developers who wish to access performant compute globally.

    Nvidia opens NVLink interconnect to competitors with NVLink Fusion

    May 19, 2025: Nvidia kicked off the Computex systems hardware tradeshow with the news that it has opened its NVLink interconnect technology to the competition with the introduction of NVLink Fusion. NVLink is a high-speed interconnect that lets multiple GPUs in a system or rack share compute and memory resources, making many GPUs appear to the system as a single processor.

    AMD, Nvidia partner with Saudi startup to build multi-billion dollar AI service centers

    May 15, 2025: As part of the avalanche of business deals coming from President Trump’s Middle East tour, both AMD and Nvidia have struck multi-billion dollar deals with an emerging Saudi AI firm. The deals served as the coming out party for Humain, a state-backed artificial intelligence (AI) company that operates under the Kingdom’s Public Investment Fund (PIF) and is chaired by Crown Prince Mohammed bin Salman. 

    Nvidia, ServiceNow engineer open-source model to create AI agents

    May 6, 2025: Nvidia and ServiceNow have created an AI model that can help companies create learning AI agents to automate corporate workloads. The open-source Apriel model, available generally in the second quarter on HuggingFace, will help create AI agents that can make decisions around IT, human resources, and customer-service functions.

    Nvidia AI supercluster targets agents, reasoning models on Oracle Cloud

    April 29, 2025: The deployment marks the first wave of liquid-cooled Nvidia GB200 NVL72 racks in OCI data centers, involving thousands of Nvidia Grace CPUs and Blackwell GPUs.

    Nvidia says NeMo microservices now generally available

    April 23, 2025: Nvidia announced the general availability of neural module (NeMo) microservices, a modular platform for building and customizing genAI models and AI agents. NeMo microservices integrate with partner platforms to provide features including prompt tuning, supervised fine-tuning, and knowledge retrieval tools.

    Nvidia expects ban on chip exports to China to cost $5.5B

    April 16, 2025: Nvidia now expects new US government restrictions on exports of its H20 chip to China will cost the company as much as $5.5 billion.

    Incomplete patching leaves Nvidia, Docker exposed to DOS attacks

    April 15, 2025: A critical race condition bug affecting the Nvidia Container Toolkit, which received a fix in September, might still be open to attacks owing to incomplete patching.

    Nvidia lays out plans to build AI supercomputers in the US

    April 14, 2025: There was mixed reaction from industry analysts over an announcement that Nvidia plans to produce AI supercomputers entirely in the US. The company said in a blog post that, together with its manufacturing partners, it has commissioned more than one million square feet (92,900 square meters) of manufacturing space to build and test Nvidia Blackwell chips in Arizona and AI supercomputers in Texas.

    Potential Nvidia chip shortage looms as Chinese customers rush to beat US sales ban

    April 2, 2025: The AI chip shortage could become even more dire as Chinese customers are purportedly looking to hoard Nvidia chips ahead of a proposed US sales ban. According to inside sources, Chinese companies including ByteDance, Alibaba Group, and Tencent Holdings have ordered at least $16 billion worth of Nvidia’s H20 server chips for running AI workloads in just the first three months of this year.

    Nvidia’s Blackwell raises the bar with new MLPerf Inference V5.0 results

    April 2, 2025: Nvidia released a set of MLPerf Inference V5.0 benchmark results for its Blackwell GPU, the successor to Hopper, saying that its GB200 NVL72 system, a rack-scale offering designed for AI reasoning, set a series of performance records.

    5 big takeaways from Nvidia GTC

    March 25, 2025: Now that the dust has settled from Nvidia’s GTC 2025, a few industry experts weighed in on some core big picture developments from the conference. Here are five of their top observations.

    Nvidia wants to be a one-stop enterprise technology shop

    March 24, 2025: After last week’s Nvidia GTC 2025 event, a new, fuller picture of the vendor emerged. Analysts agree that Nvidia is not just a graphics chip provider anymore. It’s a full-stack solution provider, and GPUs are just one of many parts.

    Nvidia launches AgentIQ toolkit to connect disparate AI agents

    March 21, 2025: As enterprises look to adopt agentic AI to boost the efficiency of their applications, Nvidia introduced a new open-source software library — the AgentIQ toolkit — to help developers connect disparate agents and agent frameworks. The toolkit, according to Nvidia, packs in a variety of tools, including ones to weave RAG, search, and conversational UI into agentic AI applications.

    Nvidia launches research center to accelerate quantum computing breakthrough

    March 21, 2025: In a move to help accelerate the timeline for practical, real-world quantum applications, Nvidia is establishing the Nvidia Accelerated Quantum Research Center. “Quantum computing will augment AI supercomputers to tackle some of the world’s most important problems,” Nvidia CEO Jensen Huang said.

    Nvidia, xAI and two energy giants join genAI infrastructure initiative

    March 19, 2025: An industry generative artificial intelligence (genAI) alliance, the AI Infrastructure Partnership (AIP), on Wednesday announced that xAI, Nvidia, GE Vernova, and NextEra Energy were joining BlackRock, Microsoft, and Global Infrastructure Partners as members. But given that no financial commitments or other details were released, will it make a difference?

    IBM broadens access to Nvidia technology for enterprise AI

    March 19, 2025: New collaborations between IBM and Nvidia have yielded a content-aware storage capability for IBM’s hybrid cloud infrastructure, expanded integration between watsonx and Nvidia NIM, and AI services from IBM Consulting that use Nvidia Blueprints.

    Nvidia’s silicon photonics switches bring better power efficiency to AI data centers

    March 19, 2025: Amid the flood of news from Nvidia’s annual GTC event, one item stood out. Nvidia introduced new silicon photonics network switches that integrate network optics into the switch using a technique called co-packaged optics (CPO), replacing traditional external pluggable transceivers. While Nvidia alluded to its new switches providing cost savings, the primary benefit is reduced power consumption, along with improved network resiliency.

    What is Nvidia Dynamo, and why does it matter to enterprises?

    March 19, 2025: Chipmaker Nvidia released Dynamo, a new open-source inferencing framework, at its GTC 2025 conference; it will allow enterprises to increase throughput and reduce cost while using large language models on Nvidia GPUs.

    HPE, Nvidia broaden AI infrastructure lineup

    March 19, 2025: HPE news from Nvidia GTC includes a new Private Cloud AI developer kit, Nvidia AI blueprints, GPU optimization capabilities, and servers built with Nvidia Blackwell Ultra and Blackwell architecture.

    Cisco, Nvidia team to deliver secure AI factory infrastructure

    March 18, 2025: Cisco and Nvidia have expanded their partnership to create their most advanced AI architecture package to date, designed to promote secure enterprise AI networking.

    Nvidia’s ‘hard pivot’ to AI reasoning bolsters Llama models for agentic AI

    March 18, 2025: The company has post-trained its new Llama Nemotron family of reasoning models to improve multistep math, coding, reasoning, and complex decision-making. The enhancements aim to provide developers and enterprises with a business-ready foundation for creating AI agents that can work independently or as part of connected teams.

    Nvidia details its GPU, CPU, and system roadmap for the next three years

    March 18, 2025: Nvidia CEO Jensen Huang shared previously unreleased specifications for the Rubin graphics processing unit (GPU), due in 2026, and the Rubin Ultra, coming in 2027, and announced the addition of a new GPU called Feynman to the mix for 2028.

    Oracle, Nvidia partner to add AI software into OCI services

    March 18, 2025: Nvidia’s AI Enterprise stack will be available natively through the OCI Console and will be available anywhere in OCI’s distributed cloud while providing enterprises access to over 160 AI tools for training and inference, including NIM microservices, the companies said in a joint statement at Nvidia’s annual GTC conference.

    Nvidia GTC 2025: What to expect from the AI leader

    March 3, 2025: Last year, Nvidia’s GTC 2024 grabbed headlines with the introduction of the Blackwell architecture and the DGX systems powered by it. With Nvidia GTC 2025 right around the corner, the tech world is eager to see what Nvidia – and its partners and competitors – will unveil next. 

    Cisco, Nvidia expand AI partnership to include Silicon One technology

    February 25, 2025: Cisco and Nvidia have expanded their collaboration to support enterprise AI implementations by tying Cisco’s Silicon One technology to Nvidia’s Ethernet networking platform. The extended agreement is designed to offer customers yet another way to support AI workloads across the data center, and it strengthens both companies’ strategies to expand the role of Ethernet networking for AI in the enterprise.

    Nvidia forges healthcare partnerships to advance AI-driven genomics, drug discovery

    February 14, 2025: Through new partnerships with industry leaders, Nvidia aims to advance practical use cases for AI in healthcare and life sciences. It’s a logical move: Healthcare has the most significant upside, particularly in patient care, among all the industries applicable to AI. 

    Nvidia partners with cybersecurity vendors for real-time monitoring

    February 12, 2025: Nvidia partnered with leading cybersecurity firms to provide real-time security protection using its accelerator and networking hardware in combination with its AI software. Under the agreement, Nvidia will provide integration of its BlueField and Morpheus hardware with cyber defense software from Armis, Check Point Software Technologies, CrowdStrike, Deloitte, and World Wide Technology.

    Nvidia claims near 50% boost in AI storage speed

    February 7, 2025: Nvidia is touting a near 50% improvement in storage read bandwidth thanks to intelligence in its Spectrum-X Ethernet networking equipment, according to the vendor’s technical blog post. Spectrum-X is a combination of the company’s Spectrum-4 Ethernet switch and BlueField-3 SuperNIC smart networking card, which supports RoCE v2 for remote direct memory access (RDMA) over Converged Ethernet.

    Nvidia unveils preview of DeepSeek-R1 NIM microservice

    January 31, 2025: Nvidia stock plummeted 17% after Chinese AI developer, DeepSeek, unveiled its DeepSeek-R1 LLM. Later the same week, the chipmaker turned around and announced the DeepSeek-R1 model is available as a preview Nvidia inference microservice (NIM) on build.nvidia.com.

    Nvidia intros new guardrail microservices for agentic AI

    January 16, 2025: Nvidia added new Nvidia inference microservices (NIMs) for AI guardrails to its Nvidia NeMo Guardrails software tools. The new microservices aim to help enterprises improve accuracy, security, and control of agentic AI applications, addressing a key reservation IT leaders have about adopting the technology.

    Nvidia year in review

    January 10, 2025: Last year was Nvidia’s year. Its command of mindshare and market share was unequaled among tech vendors. Here’s a recap of some of the key Nvidia events of 2024 that highlight just how powerful the world’s most dominant chip player is.

    Nvidia launches blueprints to help jumpstart AI projects

    January 8, 2025: Nvidia recently issued designs for AI factories after hyping up the idea for several months. Now it has come out with AI blueprints, essentially prebuilt templates that give developers a jump start on creating AI systems.

    Nvidia’s Project DIGITS puts AI supercomputing chips on the desktop

    January 6, 2025: Nvidia is readying a tiny desktop device called Project DIGITS, a “personal AI supercomputer” with a lightweight version of the Grace Blackwell platform found in its most powerful servers; it’s aimed at data scientists, researchers, and students who will be able to prototype, tune, and run large genAI models.

    Nvidia unveils generative physical AI platform, agentic AI advances at CES

    January 6, 2025: At CES in Las Vegas, Nvidia trumpeted a slew of AI announcements, with an emphasis on generative physical AI that promises a new revolution in factory and warehouse automation. “AI requires us to build an entirely new computing stack to build AI factories, accelerated computing at data center scale,” said Rev Lebaredian, vice president of omniverse and simulation technology at Nvidia.

    Verizon, Nvidia team up for enterprise AI networking

    December 30, 2024: Verizon and Nvidia partnered to build AI services for enterprises that run workloads over Verizon’s 5G private network. The new offering, 5G Private Network with Enterprise AI, will run a range of AI applications and workloads over Verizon’s private 5G network with Mobile Edge Compute (MEC). MEC is a colocated infrastructure that is a part of Verizon’s public wireless network, bringing compute and storage closer to devices and endpoints for ultra-low latency.

    Nvidia’s Run:ai acquisition waved through by EU

    December 20, 2024: Nvidia will face no objections to its plan to acquire Israeli AI orchestration software vendor Run:ai Labs in Europe, after the European Commission gave the deal its approval today. But Nvidia may not be out of the woods yet. Competition authorities in other markets are closely examining the company’s acquisition strategy.

    China launches anti-monopoly probe into Nvidia amid rising US-China chip tensions

    December 10, 2024: China has initiated an investigation into Nvidia over alleged violations of the country’s anti-monopoly laws, signaling a potential escalation in the ongoing tech and trade tensions between Beijing and Washington.

    Nvidia Blackwell chips face serious heating issues

    November 18, 2024: Nvidia’s next-generation Blackwell data center processors have significant problems with overheating when installed in high-capacity server racks, forcing redesigns of the racks themselves, according to a report by The Information. These issues have reportedly led to design changes that delay product shipments, raising concern that its biggest customers, including Google, Meta, and Microsoft, won’t be able to deploy Blackwell servers on their planned schedules.

    Nvidia to power India’s AI factories with tens of thousands of AI chips

    October 24, 2024: Nvidia plans to deploy thousands of Hopper GPUs in India to create AI factories and to collaborate with Reliance Industries on AI infrastructure. Yotta Data Services, Tata Communications, E2E Networks, and Netweb will lead the AI factories — large-scale data centers for producing AI. Nvidia added that the expansion will provide nearly 180 exaflops of computing power.

    Nvidia contributes Blackwell rack design to Open Compute Project

    October 15, 2024: Nvidia contributed its Blackwell GB200 NVL72 electro-mechanical designs to the Open Compute Project, including the rack architecture, compute and switch tray mechanicals, liquid cooling and thermal environment specifications, and Nvidia NVLink cable cartridge volumetrics.

    As global AI energy usage mounts, Nvidia claims efficiency gains of up to 100,000X

    October 8, 2024: As concerns over AI energy consumption ratchet up, chip maker Nvidia is defending what it calls a steadfast commitment to sustainability. The company reports that its GPUs have experienced a 2,000X reduction in energy use over the last 10 years in training and a 100,000X energy reduction over that same time in generating tokens.

    Accenture forms new Nvidia business group focused on agentic AI adoption

    October 4, 2024: Accenture and Nvidia announced an expanded partnership focused on helping customers rapidly scale AI adoption. Accenture said the new group will use Accenture’s AI Refinery platform — built on the Nvidia AI stack, including Nvidia AI Foundry, Nvidia AI Enterprise, and Nvidia Omniverse — to help clients create a foundation for use of agentic AI.

    IBM expands Nvidia GPU options for cloud customers

    October 1, 2024: IBM expanded access to Nvidia GPUs on IBM Cloud to help enterprise customers advance their AI implementations, including large language model (LLM) training. IBM Cloud users can now access Nvidia H100 Tensor Core GPU instances in virtual private cloud and managed Red Hat OpenShift environments.

    Oracle to offer 131,072 Nvidia Blackwell GPUs via its cloud

    September 12, 2024: Oracle started taking pre-orders for 131,072 Nvidia Blackwell GPUs in the cloud via its Oracle Cloud Infrastructure (OCI) Supercluster to aid large language model (LLM) training and other use cases, the company announced at the CloudWorld 2024 conference. The launch of an offering that provides this many Blackwell GPUs, also known as Grace Blackwell (GB) 200, is significant as enterprises globally are faced with the unavailability of high-bandwidth memory (HBM) — a key component used in making GPUs.

    Why is the DOJ investigating Nvidia?

    September 11, 2024: After a stock sell-off following its quarterly earnings report, Nvidia’s pain was aggravated by news that the Department of Justice is escalating its investigation into the company for anticompetitive practices. According to a Bloomberg report, the DOJ sent a subpoena to Nvidia as part of a probe into alleged antitrust practices.

    Cisco, HPE, Dell announce support for Nvidia’s pretrained AI workflows

    September 4, 2024: Cisco, HPE, and Dell are using Nvidia’s new AI microservices blueprints to help enterprises streamline the deployment of generative AI applications. Nvidia announced its NIM Agent Blueprints, a catalogue of pretrained, customizable AI workflows designed to provide a jump-start for developers creating AI applications. NIM Agent Blueprints target a number of use cases, including customer service, virtual screening for computer-aided drug discovery, and a multimodal PDF data extraction workflow for retrieval-augmented generation (RAG) that can ingest vast quantities of data.

    Nvidia reportedly trained AI models on YouTube data

    August 4, 2024: Nvidia scraped huge amounts of data from YouTube to train its AI models, even though neither YouTube nor individual YouTube channels approved the move, according to leaked documents. Among other things, Nvidia reportedly used the YouTube data to train its deep learning model Cosmos, an algorithm for automated driving, a human-like AI avatar, and Omniverse, a tool for building 3D worlds.

    Can Intel’s new chips compete with Nvidia in the AI universe?

    June 9, 2024: Intel is aiming its next-generation x86 processors at AI tasks, even though the chips won’t actually run AI workloads themselves. At Computex, Intel announced its Xeon 6 processor line, talking up what it calls Efficient-cores (E-cores) that it said will deliver up to 4.2 times the performance of Xeon 5 processors. The first Xeon 6 CPU is the Sierra Forest version (6700 series); a more performance-oriented line, Granite Rapids with Performance-cores (P-cores, the 6900 series), will be released next quarter.

    Everyone but Nvidia joins forces for new AI interconnect

    May 30, 2024: A clear sign of Nvidia’s dominance is when Intel and AMD link arms to deliver a competing product. That’s what happened when AMD and Intel – along with Broadcom, Cisco, Google, Hewlett Packard Enterprise, Meta and Microsoft – formed the Ultra Accelerator Link (UALink) Promoter Group to develop high-speed interconnections between AI processors.

    Nvidia to build supercomputer for federal AI research

    May 15, 2024: The U.S. government will use an Nvidia DGX SuperPOD to provide researchers and developers access to much more computing power than they have had in the past to produce generative AI advances in areas such as climate science, healthcare and cybersecurity.

    Nvidia, Google Cloud team to boost AI startups

    April 11, 2024: Alphabet’s Google Cloud unveiled a slew of new products and services at Google Cloud Next 2024, among them a program to help startups and small businesses build generative AI applications and services. The initiative brings together the Nvidia Inception program for startups and the Google for Startups Cloud Program.

    Nvidia GTC 2024 wrap-up: Blackwell not the only big news

    March 29, 2024: Nvidia’s GTC is in our rearview mirror, and there was plenty of news beyond the major announcement of the Blackwell architecture and the massive new DGX systems powered by it. Here’s a rundown of some of the announcements you might have missed.

    Nvidia expands partnership with hyperscalers to boost AI training and development

    March 19, 2024: Nvidia extended its existing partnerships with hyperscalers Amazon Web Services (AWS), Google Cloud Platform, Microsoft Azure, and Oracle Cloud Infrastructure, to make available its latest GPUs and foundational large language models and to integrate its software across their platforms.

    Nvidia launches Blackwell GPU architecture

    March 18, 2024: Nvidia kicked off its GTC 2024 conference with the formal launch of Blackwell, its next-generation GPU architecture due at the end of the year. Blackwell uses a chiplet design, to a point. Whereas AMD’s designs have several chiplets, Blackwell has two very large dies that are tied together as one GPU with a high-speed interconnect that operates at 10 terabytes per second, according to Ian Buck, vice president of HPC at Nvidia.

    Cisco, Nvidia target secure AI with expanded partnership

    February 9, 2024: Cisco and Nvidia expanded their partnership to offer integrated software and networking hardware that promises to help customers more easily spin up infrastructure to support AI applications. The agreement deepens both companies’ strategy to expand the role of Ethernet networking for AI workloads in the enterprise. It also gives both companies access to each other’s sales and support systems.

    Nvidia and Equinix partner for AI data center infrastructure

    January 9, 2024: Nvidia partnered with data center giant Equinix to offer what the vendors are calling Equinix Private AI with Nvidia DGX, a turnkey solution for companies that are looking to get into the generative AI game but lack the data center infrastructure and expertise to do it.


  • Faster, Smaller AI Model Found for Image Geolocation

    Imagine playing a new, slightly altered version of the game GeoGuessr. You’re faced with a photo of an average U.S. house, maybe two floors with a front lawn in a cul-de-sac and an American flag flying proudly out front. But there’s nothing particularly distinctive about this home, nothing to tell you the state it’s in or where the owners are from.

    You have two tools at your disposal: your brain, and 44,416 low-resolution, bird’s-eye-view photos of random places across the United States and their associated location data. Could you match the house to an aerial image and locate it correctly?

    I definitely couldn’t, but a new machine learning model likely could. The software, created by researchers at China University of Petroleum (East China), searches a database of remote sensing photos with associated location information to match the streetside image—of a home or a commercial building or anything else that can be photographed from a road—to an aerial image in the database. While other systems can do the same, this one is pocket-size compared to others and super accurate.

    At its best (when faced with a picture that has a 180-degree field of view), it succeeds up to 97 percent of the time in the first stage of narrowing down location. That’s better than or within two percentage points of all the other models available for comparison. Even under less-than-ideal conditions, it performs better than many competitors. When pinpointing an exact location, it’s correct 82 percent of the time, which is within three points of the other models.

    But this model is novel for its speed and memory savings. It is at least twice as fast as similar ones and uses less than a third of the memory they require, according to the researchers. The combination makes it valuable for applications in navigation systems and the defense industry.

    “We train the AI to ignore the superficial differences in perspective and focus on extracting the same ‘key landmarks’ from both views, converting them into a simple, shared language,” explains Peng Ren, who develops machine learning and signal processing algorithms at China University of Petroleum (East China).

    The software relies on a method called deep cross-view hashing. Rather than try to compare each pixel of a street view picture to every single image in the giant bird’s-eye-view database, this method relies on hashing, which means transforming a collection of data—in this case, street-level and aerial photos—into a string of numbers unique to the data.

    To do that, the China University of Petroleum research group employs a type of deep learning model called a vision transformer that splits images into small units and finds patterns among the pieces. The model may find in a photo what it’s been trained to identify as a tall building or circular fountain or roundabout, and then encode its findings into number strings. ChatGPT is based on a similar architecture, but finds patterns in text instead of images. (The “T” in “GPT” stands for “transformer.”)
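
    To make the hashing step concrete, here is a minimal sketch in Python. It is not the paper’s code: the `embed` function and the random projection `W` are stand-ins for the trained vision transformer, and the binarization simply takes the sign of each feature to produce a ±1 hash code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    D = 64                                    # hash length in bits

    def embed(image: np.ndarray, W: np.ndarray) -> np.ndarray:
        """Stand-in for the trained vision-transformer encoder:
        a fixed linear projection of flattened pixels to D features."""
        return np.tanh(image.reshape(-1) @ W)

    def to_hash(feature: np.ndarray) -> np.ndarray:
        """Binarize a real-valued feature vector into a +/-1 hash code."""
        return np.where(feature >= 0, 1, -1)

    W = rng.normal(size=(32 * 32, D))         # shared projection for both views
    street = rng.random((32, 32))             # toy street-level "image"
    aerial = rng.random((32, 32))             # toy bird's-eye "image"

    street_code = to_hash(embed(street, W))
    aerial_code = to_hash(embed(aerial, W))

    # Hamming distance between +/-1 codes: (D - dot product) / 2.
    hamming = (D - street_code @ aerial_code) // 2
    print(f"{hamming} of {D} bits differ")
    ```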

    The number that represents each picture is like a fingerprint, says Hongdong Li, who studies computer vision at the Australian National University. The number code captures unique features from each image that allow the geolocation process to quickly narrow down possible matches.

    In the new system, the code associated with a given ground-level photo gets compared to those of all of the aerial images in the database (for testing, the team used satellite images of the United States and Australia), yielding the five closest candidates for aerial matches. Data representing the geography of the closest matches is averaged using a technique that weighs locations closer to each other more heavily to reduce the impact of outliers, and out pops an estimated location of the street view image.
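
    Here is a hedged sketch of that retrieval step, under stated assumptions: the database codes and coordinates below are synthetic, and since the paper’s exact outlier-resistant weighting isn’t reproduced here, this version weights each of the five candidates by the inverse of its mean distance to the other candidates.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    D, N = 64, 10_000                         # hash bits, database size

    # Synthetic stand-ins: one +/-1 code and one (lat, lon) per aerial tile.
    db_codes = rng.choice([-1, 1], size=(N, D))
    db_coords = np.column_stack([rng.uniform(25, 49, N),       # latitude
                                 rng.uniform(-124, -67, N)])   # longitude
    query = rng.choice([-1, 1], size=D)       # code of the street-level photo

    # Hamming distance to every database code, then the 5 nearest matches.
    dists = (D - db_codes @ query) // 2
    top5 = np.argsort(dists)[:5]
    cands = db_coords[top5]

    # Candidates that cluster together get more weight than outliers.
    pairwise = np.linalg.norm(cands[:, None, :] - cands[None, :, :], axis=-1)
    mean_sep = pairwise.sum(axis=1) / (len(cands) - 1)
    weights = 1.0 / (mean_sep + 1e-6)
    estimate = (weights[:, None] * cands).sum(axis=0) / weights.sum()
    print("estimated (lat, lon):", estimate)
    ```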

    The new mechanism for geolocation was published last month in IEEE Transactions on Geoscience and Remote Sensing.

    Fast and memory efficient

    “Though not a completely new paradigm,” this paper “represents a clear advance within the field,” Li says. Because this problem has been solved before, some experts, like Washington University in St. Louis computer scientist Nathan Jacobs, are not as excited. “I don’t think that this is a particularly groundbreaking paper,” he says.

    But Li disagrees with Jacobs—he thinks this approach is innovative in its use of hashing to make finding image matches faster and more memory efficient than conventional techniques. It uses just 35 megabytes, while the next smallest model Ren’s team examined requires 104 megabytes, about three times as much space.

    The method is more than twice as fast as the next fastest one, the researchers claim. When matching street-level images to a dataset of aerial photography of the United States, the runner-up’s time to match was around 0.005 seconds—the Petroleum group was able to find a location in around 0.0013 seconds, almost four times faster.

    “As a result, our method is more efficient than conventional image geolocalization techniques,” says Ren, and Li confirms that these claims are credible. Hashing “is a well-established route to speed and compactness, and the reported results align with theoretical expectations,” Li says.

    Though these efficiencies seem promising, more work is required to ensure this method will work at scale, Li says. The group did not fully study realistic challenges like seasonal variation or clouds blocking the image, which could impact the robustness of the geolocation matching. Down the line, this limitation can be overcome by introducing images from more distributed locations, Ren says.

    Still, long-term applications (beyond a super advanced GeoGuessr) are worth considering now, experts say.

    There are some trivial uses for efficient image geolocation, such as automatically geotagging old family photos, says Jacobs. But on the more serious side, navigation systems could also exploit a geolocation method like this one. If GPS fails in a self-driving car, another way to quickly and precisely find location could be useful, Jacobs says. Li also suggests it could play a role in emergency response within the next five years.

    There may also be applications in defense systems. Finder, a 2011 project from the Office of the Director of National Intelligence, aimed to help intelligence analysts learn as much as they could about photos without metadata using reference data from sources including overhead images, a goal that could be accomplished with models similar to this new geolocation method.

    Jacobs puts the defense application into context: If a government agency sent a photo of a terrorist training camp without metadata, how can the site be geolocated quickly and efficiently? Deep cross-view hashing might be of some help.


  • Teaching AI to Predict What Cells Will Look Like Before Running Any Experiments

    This is a sponsored article brought to you by MBZUAI.

    If you’ve ever tried to guess how a cell will change shape after a drug or a gene edit, you know it’s part science, part art, and mostly expensive trial-and-error. Imaging thousands of conditions is slow; exploring millions is impossible.

    A new paper in Nature Communications proposes a different route: simulate those cellular “after” images directly from molecular readouts, so you can preview the morphology before you pick up a pipette. The team calls their model MorphDiff, and it’s a diffusion model guided by the transcriptome, the pattern of genes turned up or down after a perturbation.

    At a high level, the idea flips a familiar workflow. High-throughput imaging is a proven way to discover a compound’s mechanism or spot bioactivity, but profiling every candidate drug or CRISPR target isn’t feasible. MorphDiff learns from cases where both gene expression and cell morphology are known, then uses only the L1000 gene expression profile as a condition to generate realistic post-perturbation images, either from scratch or by transforming a control image into its perturbed counterpart. The claim is that it achieves competitive fidelity on held-out (unseen) perturbations across large drug and genetic datasets, plus gains on mechanism-of-action (MOA) retrieval that rival real images.

    This research, led by MBZUAI researchers, starts from a biological observation: gene expression ultimately drives the proteins and pathways that shape what a cell looks like under the microscope. The mapping isn’t one-to-one, but there’s enough shared signal for learning. Conditioning on the transcriptome offers a practical bonus too: there’s simply far more publicly accessible L1000 data than paired morphology, making it easier to cover a wide swath of perturbation space. In other words, when a new compound arrives, you’re likely to find its gene signature, which MorphDiff can then leverage.

    Under the hood, MorphDiff blends two pieces. First, a Morphology Variational Autoencoder (MVAE) compresses five-channel microscope images into a compact latent space and learns to reconstruct them with high perceptual fidelity. Second, a Latent Diffusion Model learns to denoise samples in that latent space, steering each denoising step with the L1000 vector via attention.
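
    As a rough illustration of how those two pieces fit together, here is a toy gene-to-image sampling loop in Python. Everything in it is an assumption for clarity: `eps_model` stands in for the trained conditional denoiser (the real network attends to the L1000 vector at every step), the noise schedule uses generic DDPM defaults, and the returned latent would be decoded by the MVAE in the real pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, latent_dim = 50, 256                   # denoising steps, MVAE latent size
    betas = np.linspace(1e-4, 0.02, T)        # generic DDPM noise schedule
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)

    def eps_model(z, t, cond):
        """Stand-in for the conditional denoiser: mixes the latent with a
        squashed view of the gene-expression condition so the loop runs."""
        return 0.1 * z + 0.01 * np.resize(np.tanh(cond), z.shape)

    def sample_g2i(cond):
        """Gene-to-image (G2I) sampling: start from pure noise in latent
        space and denoise step by step, steered by the transcriptome."""
        z = rng.standard_normal(latent_dim)
        for t in range(T - 1, -1, -1):
            eps = eps_model(z, t, cond)
            z = (z - betas[t] / np.sqrt(1 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
            if t > 0:                         # no noise added on the final step
                z += np.sqrt(betas[t]) * rng.standard_normal(latent_dim)
        return z                              # hand off to the MVAE decoder

    l1000 = rng.standard_normal(978)          # 978 L1000 landmark genes
    latent = sample_g2i(l1000)
    print(latent.shape)
    ```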

    [Figure: Cell Painting analysis pipeline, including dataset curation and perturbation modeling. Wang et al., Nature Communications (2025), CC BY 4.0]

    Diffusion is a good fit here: it’s intrinsically robust to noise, and the latent space variant is efficient enough to train while preserving image detail. The team implements both gene-to-image (G2I) generation (start from noise, condition on the transcriptome) and image-to-image (I2I) transformation (push a control image toward its perturbed state using the same transcriptomic condition). The latter requires no retraining thanks to an SDEdit-style procedure, which is handy when you want to explain changes relative to a control.
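
    The image-to-image (I2I) mode can be sketched the same way, continuing the toy setup above (it reuses eps_model, betas, alphas, alpha_bar, rng, latent_dim, and l1000). SDEdit-style editing noises the encoded control image partway up the schedule and then denoises it under the transcriptomic condition; the cutoff step t0 is an assumed knob that trades faithfulness to the control against the strength of the perturbation.

    ```python
    def sample_i2i(z_control, cond, t0=25):
        """SDEdit-style transform: partially noise the control latent,
        then denoise it with the same transcriptome-conditioned model."""
        z = (np.sqrt(alpha_bar[t0]) * z_control
             + np.sqrt(1 - alpha_bar[t0]) * rng.standard_normal(z_control.shape))
        for t in range(t0, -1, -1):
            eps = eps_model(z, t, cond)
            z = (z - betas[t] / np.sqrt(1 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
            if t > 0:
                z += np.sqrt(betas[t]) * rng.standard_normal(z_control.shape)
        return z

    z_control = rng.standard_normal(latent_dim)    # encoded control image
    perturbed_latent = sample_i2i(z_control, l1000)
    ```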

    It’s one thing to generate photogenic pictures; it’s another to generate biologically faithful ones. The paper leans into both: on the generative side, MorphDiff is benchmarked against GAN and diffusion baselines using standard metrics like FID, Inception Score, coverage, density, and a CLIP-based CMMD. Across JUMP (genetic) and CDRP/LINCS (drug) test splits, MorphDiff’s two modes typically land first and second, with significance tests run across multiple random seeds or independent control plates. The result is consistent: better fidelity and diversity, especially on OOD perturbations where practical value lives.

    More interesting for biologists, the authors step beyond image aesthetics to morphology features. They extract hundreds of CellProfiler features (textures, intensities, granularity, cross-channel correlations) and ask whether the generated distributions match the ground truth.

    In side-by-side comparisons, MorphDiff’s feature clouds line up with real data more closely than baselines like IMPA. Statistical tests show that over 70 percent of generated feature distributions are indistinguishable from real ones, and feature-wise scatter plots show the model correctly captures differences from control on the most perturbed features. Crucially, the model also preserves correlation structure between gene expression and morphology features, with higher agreement to ground truth than prior methods, evidence that it’s modeling more than surface style.

    [Figure: Comparison of computational methods on biological data analysis. Wang et al., Nature Communications (2025), CC BY 4.0]

    The drug results scale up that story to thousands of treatments. Using DeepProfiler embeddings as a compact morphology fingerprint, the team demonstrates that MorphDiff’s generated profiles are discriminative: classifiers trained on real embeddings also separate generated ones by perturbation, and pairwise distances between drug effects are preserved.

    [Figure: Accuracy comparison across morphology-generation methods, in four panels. Wang et al., Nature Communications (2025), CC BY 4.0]

    That matters for the downstream task everyone cares about: MOA retrieval. Given a query profile, can you find reference drugs with the same mechanism? MorphDiff’s generated morphologies not only beat prior image-generation baselines but also outperform retrieval using gene expression alone, and they approach the accuracy you get using real images. In top-k retrieval experiments, the average improvement over the strongest baseline is 16.9 percent, and 8.0 percent over transcriptome-only, with robustness shown across several k values and metrics like mean average precision and folds-of-enrichment. That’s a strong signal that simulated morphology contains information complementary to chemical structure and transcriptomics, enough to help find look-alike mechanisms even when the molecules themselves look nothing alike.
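
    For readers who want the retrieval logic spelled out, here is a small sketch of top-k MOA retrieval on synthetic data. The embedding sizes, the 20 mechanism classes, and the use of cosine similarity are illustrative assumptions; the study itself works with DeepProfiler embeddings of real and generated images.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_ref, n_query, dim, k = 500, 100, 128, 10

    # Hypothetical morphology embeddings labeled with MOA classes.
    ref_emb = rng.standard_normal((n_ref, dim))
    ref_moa = rng.integers(0, 20, n_ref)      # 20 mechanism classes
    q_emb = rng.standard_normal((n_query, dim))
    q_moa = rng.integers(0, 20, n_query)

    def normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    sims = normalize(q_emb) @ normalize(ref_emb).T   # cosine similarities
    topk = np.argsort(-sims, axis=1)[:, :k]          # k nearest references

    # A query "hits" if any of its top-k references shares its mechanism.
    hits = [(ref_moa[idx] == m).any() for idx, m in zip(topk, q_moa)]
    print(f"top-{k} retrieval accuracy: {np.mean(hits):.3f}")
    ```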

    The paper also lists some current limitations that hint at potential future improvements. Inference with diffusion remains relatively slow; the authors suggest plugging in newer samplers to speed generation. Time and concentration (two factors that biologists care about) aren’t explicitly encoded due to data constraints; the architecture could take them as additional conditions when matched datasets become available. And because MorphDiff depends on perturbed gene expression as input, it can’t conjure morphology for perturbations that lack transcriptome measurements; a natural extension is to chain with models that predict gene expression for unseen drugs (the paper cites GEARS as an example). Finally, generalization inevitably weakens as you stray far from the training distribution; larger, better-matched multimodal datasets will help, as will conditioning on more modalities such as structures, text descriptions, or chromatin accessibility.

    What does this mean in practice? Imagine a screening team with a large L1000 library but a smaller imaging budget. MorphDiff becomes a phenotypic copilot: generate predicted morphologies for new compounds, cluster them by similarity to known mechanisms, and prioritize which to image for confirmation. Because the model also surfaces interpretable feature shifts, researchers can peek under the hood. Did ER texture and mitochondrial intensity move the way we’d expect for an EGFR inhibitor? Did two structurally unrelated molecules land in the same phenotypic neighborhood? Those are the kinds of hypotheses that accelerate mechanism hunting and repurposing.

    The bigger picture is that generative AI has finally reached a fidelity level where in-silico microscopy can stand in for first-pass experiments. We’ve already seen text-to-image models explode in consumer domains; here, a transcriptome-to-morphology model shows that the same diffusion machinery can do scientifically useful work such as capturing subtle, multi-channel phenotypes and preserving the relationships that make those images more than eye candy. It won’t replace the microscope. But if it reduces the number of plates you have to run to find what matters, that’s time and money you can spend validating the hits that count.


  • 2025 global network outage report and internet health check

    The reliability of services delivered by ISPs, cloud providers and conferencing services is critical for enterprise organizations. ThousandEyes, a Cisco company, monitors how providers are handling any performance challenges and provides Network World with a weekly roundup of events that impact service delivery. Read on to see the latest analysis, and stop back next week for another update on the performance of cloud providers and ISPs.

    Note: We have archived prior-year outage updates, including our 2024 report, 2023 report, and Covid-19 coverage.

    Internet Report for Oct. 6-Oct. 12

    ThousandEyes reported 185 global network outage events across ISPs, cloud service provider networks, collaboration app networks, and edge networks (including DNS, content delivery networks, and security as a service) during the week of October 6 through October 12. The total of outage events decreased by 18% compared to the 226 outages from the week prior. Specific to the U.S., there were 113 outages, which is down 14% from 132 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages decreased from 133 last week to 124, down 7%. In the U.S., ISP outages fell from 77 to 70, down 9% week-over-week.
    • Public cloud network outages: Globally, public cloud network outages decreased from 46 to 22, down 52% compared to the previous week. In the U.S., public cloud network outages fell from 38 to 19, down 50%.
    • Collaboration app network outages: Both global and U.S. collaboration application network outages remained at zero, unchanged from the previous week.

    Two Notable Outages

    On October 9, GitHub, a U.S.-based software development and version control platform headquartered in San Francisco, California, experienced an outage that impacted some of its users and customers across multiple regions, including the U.S., the U.K., France, India, and Hong Kong. The outage, which lasted a total of 29 minutes over a 40-minute period, was first observed around 10:40 AM EDT and appeared to be centered on GitHub nodes located in Washington, D.C. The outage was cleared around 11:20 AM EDT.

    On October 7, Lumen, a U.S.-based Tier 1 carrier (previously known as CenturyLink), experienced an outage that affected customers and downstream partners across multiple regions, including the U.S., Brazil, Hong Kong, the U.K., India, the Philippines, France, Australia, and Canada. The outage, lasting a total of 41 minutes over a period of 56 minutes, was first observed around 12:24 PM EDT and initially appeared to be centered on Lumen nodes located in Seattle, WA. Around five minutes into the outage, the Seattle nodes were joined by nodes located in Portland, OR, and Dallas, TX, in exhibiting outage conditions. This increase in affected nodes and locations appeared to coincide with a rise in the number of impacted regions, downstream customers, and partners. A further five minutes later, the nodes located in Portland, OR, appeared to clear and were replaced by nodes located in Atlanta, GA. Around 25 minutes after first being observed, all nodes except those located in Seattle, WA, appeared to clear. Ten minutes later, the Seattle nodes were joined by nodes located in Washington, D.C., in exhibiting outage conditions. Around 11 minutes after appearing to clear, nodes located in Seattle, WA, once again began exhibiting outage conditions. The outage was cleared around 1:20 PM EDT.

    Internet Report for Sept. 29-Oct. 5

    ThousandEyes reported 226 global network outage events across ISPs, cloud service provider networks, collaboration app networks, and edge networks (including DNS, content delivery networks, and security as a service) during the week of September 29 through October 5. The total of outage events decreased by 26% compared to the 305 outages from the week prior. Specific to the U.S., there were 132 outages, which is down 23% from 171 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages decreased from 172 last week to 133, down 23%. In the U.S., ISP outages fell from 84 to 77, down 8% week-over-week.
    • Public cloud network outages: Globally, public cloud network outages decreased from 66 to 46, down 30% compared to the previous week. In the U.S., public cloud network outages fell from 45 to 38, down 16%.
    • Collaboration app network outages: Both global and U.S. collaboration application network outages remained at zero, unchanged from the previous week.

    Two notable outages

    On October 2, UUNET, which was acquired by Verizon in 2006 and now operates as Verizon Business, experienced an outage that affected customers and partners across multiple regions, including the U.S. and Canada. The outage, which lasted 58 minutes, was first observed around 12:10 AM EDT and appeared to be centered on Verizon Business nodes in Buffalo, NY. The outage was resolved around 1:10 AM EDT. Click here for an interactive view.

    On October 2, Arelion (formerly known as Telia Carrier), a global Tier 1 provider headquartered in Stockholm, Sweden, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., Canada, and Mexico. The disruption, which lasted a total of 39 minutes over a period of one hour and 5 minutes, was first observed around 10:55 PM EDT and appeared to center on nodes located in Dallas, TX, and Los Angeles, CA. The outage was cleared around 11:50 PM EDT. Click here for an interactive view.

    Internet Report for Sept. 22-28

    ThousandEyes reported 305 global network outage events across ISPs, cloud service provider networks, collaboration app networks, and edge networks (including DNS, content delivery networks, and security as a service) during the week of September 22-28. The total number of outage events increased by 1% compared to the 302 outages from the week prior. Specific to the U.S., there were 171 outages, which is up 6% from 161 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 137 to 172, a 26% rise. In the U.S., ISP outages doubled, rising from 42 to 84, a 100% increase week-over-week.
    • Public cloud network outages: Globally, public cloud network outages fell sharply from 102 to 66, a 35% decrease. In the U.S., public cloud network outages dropped from 81 to 45, a 44% decrease.
    • Collaboration app network outages: Both global and U.S. collaboration application network outages remained at zero, unchanged from the previous week.

    Two notable outages

    On September 26, Zayo Group, a U.S.-based Tier 1 carrier headquartered in Boulder, Colorado, experienced an outage that impacted some of its partners and customers across multiple regions, including the U.S., the Netherlands, Mexico, Switzerland, Ireland, the U.K., Germany, India, Spain, and Romania. The outage, which lasted a total of 32 minutes over a 53-minute period, was first observed around 11:28 PM EDT and appeared to be centered on Zayo nodes located in Chicago, IL. Around ten minutes into the outage, a number of nodes located in Chicago, IL, appeared to clear. This decrease in the number of nodes exhibiting outage conditions appeared to coincide with a decrease in the number of impacted downstream partners. The outage was cleared around 12:35 AM EDT. Click here for an interactive view.

    On September 24, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers and customers across multiple regions including the U.S., Canada, the Netherlands, the U.K., Spain, Belgium, Switzerland, Luxembourg, Bulgaria, and Germany. The outage, which lasted a total of 33 minutes, was distributed across a series of occurrences over a period of one hour and fifteen minutes. The first occurrence of the outage was observed around 5:20 AM EDT and initially seemed to be centered on Cogent nodes located in Salt Lake City, UT. Five minutes after appearing to clear, nodes located in Salt Lake City, UT, once again began exhibiting outage conditions, this time joined by nodes located in San Jose, CA, and Seattle, WA. Around an hour after first being observed, the nodes located in San Jose, CA, Seattle, WA, and Salt Lake City, UT, were joined by nodes located in Santa Clara, CA, in exhibiting outage conditions. This increase in the number of nodes and locations exhibiting outage conditions appeared to coincide with an increase in the number of impacted downstream partners. A further five minutes after appearing to clear, nodes located in Salt Lake City, UT, once again began exhibiting outage conditions. The outage was cleared around 6:35 AM EDT. Click here for an interactive view.

    Internet Report for Sept. 15-21

    ThousandEyes reported 302 global network outage events across ISPs, cloud service provider networks, collaboration app networks, and edge networks (including DNS, content delivery networks, and security as a service) during the week of September 15-21. The total number of outage events was nearly flat compared to the 301 outages from the week prior. Specific to the U.S., there were 161 outages, which is down 13% from 184 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages decreased from 145 to 137, a 6% decrease. In the U.S., ISP outages decreased from 68 to 42, a 38% decrease.
    • Public cloud network outages: Globally, public cloud network outages increased from 96 to 102, a 6% increase. In the U.S., however, outages decreased from 85 to 81, a 5% decrease.
    • Collaboration app network outages: Both global and U.S. collaboration application network outages remained at zero, unchanged from the previous week.

    Two notable outages

    On September 19, Lumen, a U.S.-based Tier 1 carrier (previously known as CenturyLink), experienced an outage that affected customers and downstream partners across multiple regions including the U.S., South Africa, Estonia, the Netherlands, Finland, the U.K., Chile, France, Argentina, Colombia, Brazil, Mexico, and Belgium. The outage, lasting 9 minutes, was first observed around 12:56 AM EDT and appeared to be centered on Lumen nodes located in New York, NY. Around 4 minutes after first being observed, a number of nodes exhibiting outage conditions located in New York appeared to clear. This decrease appeared to coincide with a drop in the number of impacted partners and customers. The outage was cleared around 1:10 AM EDT. Click here for an interactive view.

    On September 19, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers as well as Cogent customers across various regions, including the U.S., Spain, Egypt, Vietnam, and Australia. The outage, which lasted 12 minutes, was first observed around 6:36 AM EDT and appeared to initially center on Cogent nodes located in Los Angeles, CA, and San Francisco, CA. Around four minutes after first being observed, nodes located in Los Angeles, CA, appeared to clear and were replaced by nodes located in Portland, OR, in exhibiting outage conditions. A further five minutes later, nodes located in San Francisco, CA, appeared to clear, leaving just the nodes located in Portland, OR, exhibiting outage conditions. This decrease in nodes exhibiting outage conditions appeared to coincide with a drop in the number of impacted partners and customers. The outage was cleared around 6:50 AM EDT. Click here for an interactive view.

    Internet Report for Sept. 8-14

    ThousandEyes reported 301 global network outage events across ISPs, cloud service provider networks, collaboration app networks, and edge networks (including DNS, content delivery networks, and security as a service) during the week of September 8-14. The total number of outage events decreased by 2% compared to the 308 outages from the week prior. Specific to the U.S., there were 184 outages, which is up 11% from 166 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages decreased from 158 to 145, an 8% decrease. In the U.S., however, ISP outages increased from 61 to 68, an 11% rise.
    • Public cloud network outages: Globally, public cloud network outages increased from 87 to 96, a 10% increase. In the U.S., outages rose from 71 to 85, a 20% increase.
    • Collaboration app network outages: Both global and U.S. collaboration application network outages remained at zero, unchanged from the previous week.

    Two notable outages

    On September 10, Arelion (formerly known as Telia Carrier), a global Tier 1 provider headquartered in Stockholm, Sweden, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., Slovakia, Poland, Germany, Spain, Finland, Canada, France, and Mexico. The disruption, which lasted a total of 14 minutes over a 29-minute period, was first observed around 2:36 AM EDT and appeared to initially center on nodes located in Dallas, TX. Twenty-four minutes after first being observed, nodes in Dallas were joined by nodes in San Antonio, TX, in exhibiting outage conditions. The outage was cleared around 3:05 AM EDT. Click here for an interactive view.

    On September 12, Hurricane Electric, a network transit provider based in Fremont, CA, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., the U.K., Australia, Japan, Sweden, Singapore, New Zealand, Ireland, and Switzerland. The outage was first observed around 4:10 AM EDT and lasted a total of 6 minutes over a period of 35 minutes. The outage initially appeared to be centered on Hurricane Electric nodes located in San Jose, CA. Eleven minutes after appearing to clear, nodes located in San Jose, CA, once again began exhibiting outage conditions and were joined by nodes located in San Francisco, CA. The outage was cleared at around 4:45 AM EDT. Click here for an interactive view.

    Internet Report for Sept. 1-7

    ThousandEyes reported 308 global network outage events across ISPs, cloud service provider networks, collaboration app networks, and edge networks (including DNS, content delivery networks, and security as a service) during the week of September 1-7. The total number of outage events increased by 18% compared to the 260 outages from the week prior. Specific to the U.S., there were 166 outages, which is up 22% from 136 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 135 to 158, a 17% increase. In the U.S., however, ISP outages held steady at 61.
    • Public cloud network outages: Globally, public cloud network outages increased from 65 to 87, a 34% increase. In the U.S., outages rose from 48 to 71, a 48% increase.
    • Collaboration app network outages: Both global and U.S. collaboration application network outages dropped to zero, down from 4 the week prior.

    Two notable outages

    On September 2, Lumen, a U.S.-based Tier 1 carrier (previously known as CenturyLink), experienced an outage that affected customers and downstream partners across multiple regions, including the U.S., the U.K., the Philippines, Ireland, India, Canada, Malaysia, Argentina, Poland, and Bulgaria. The outage, lasting a total of one hour and 57 minutes over a period of two hours and 13 minutes, was first observed around 2:52 PM EDT and appeared to initially be centered on Lumen nodes located in Denver, CO. Around 28 minutes into the outage, nodes located in Denver, CO, were joined by nodes located in Chicago, IL, in exhibiting outage conditions. This increase appeared to coincide with a rise in the number of impacted regions, downstream customers, and partners. The outage was cleared around 5:05 PM EDT. Click here for an interactive view.

    On September 6, GTT Communications, a Tier 1 provider headquartered in Tysons, VA, experienced an outage that impacted some of its partners and customers across multiple regions, including the U.S., Germany, France, Canada, Singapore, the U.K., Japan, Brazil, Taiwan, Argentina, India, South Africa, Italy, Spain, Egypt, Luxembourg, Australia, and Mexico. The outage, which lasted 42 minutes, was first observed around 6:42 AM EDT and appeared to initially be centered on GTT nodes located in Washington, D.C., New York, NY, Frankfurt, Germany, and Toronto, Canada. Around five minutes later, nodes located in Washington, D.C., New York, NY, Frankfurt, Germany, and Toronto, Canada, were joined by nodes located in London, England, Los Angeles, CA, and Chicago, IL, in exhibiting outage conditions. This increase appeared to coincide with a rise in the number of impacted regions, downstream customers, and partners. A further five minutes later, all nodes except those located in Washington, D.C., appeared to clear. The outage was cleared around 7:25 AM EDT. Click here for an interactive view.

    Internet Report for Aug. 25-31

    ThousandEyes reported 260 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of August 25-31. The total number of outage events was unchanged from the 260 outages the week prior. Specific to the U.S., there were 136 outages, which is up 11% from 123 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages decreased from 154 to 135, a 12% decrease compared to the previous week. In the U.S., however, ISP outages increased from 52 to 61, a 17% increase.
    • Public cloud network outages: Globally, public cloud network outages increased from 57 to 65, a 14% increase compared to the previous week. In the U.S., outages increased from 44 to 48, a 9% increase.
    • Collaboration app network outages: Both global and U.S. collaboration application network outages rose from zero to 4 outages.

    Two notable outages

    On August 26, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers as well as Cogent customers across various regions, including the U.S., Australia, Colombia, Brazil, Mexico, the U.K., Canada, Hong Kong, Armenia, Spain, South Korea, India, Chile, Germany, Taiwan, Costa Rica, Japan, the Netherlands, Uruguay, Argentina, and Italy. The outage, which lasted 22 minutes, was first observed around 2:30 AM EDT and appeared to initially center on Cogent nodes located in Chicago, IL, New York, NY, Seattle, WA, Los Angeles, CA, San Jose, CA, Washington, D.C., and Mexico City, Mexico. Around five minutes after first being observed, nodes located in Chicago, IL, Seattle, WA, Washington, D.C., and Mexico City, Mexico, appeared to clear and were replaced by nodes located in Toronto, Canada, in exhibiting outage conditions. This decrease in nodes exhibiting outage conditions appeared to coincide with a drop in the number of impacted partners and customers. The outage was resolved around 2:55 AM EDT. Click here for an interactive view.

    On August 27, Microsoft experienced an outage on its network that impacted some downstream partners and access to services running on Microsoft environments across the U.S. The outage, which lasted 34 minutes, was first observed around 6:50 AM EDT and appeared to be centered on Microsoft nodes located in Chicago, IL. Around 20 minutes after first being observed, a number of nodes exhibiting outage conditions located in Chicago, IL, appeared to clear. This decrease in nodes exhibiting outage conditions appeared to coincide with a drop in the number of impacted partners and customers. The outage was cleared around 7:25 AM EDT. Click here for an interactive view.

    Internet Report for Aug. 18-24

    ThousandEyes reported 260 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of August 18-24. That’s an increase of 6% from the 246 outages the week prior. Specific to the U.S., there were 123 outages, which is up 6% from 116 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 144 to 154, a 7% increase compared to the previous week. In the U.S., ISP outages rose from 44 to 52, an 18% increase.
    • Public cloud network outages: Globally, public cloud network outages decreased from 61 to 57, a 7% decrease compared to the previous week. In the U.S., outages declined from 53 to 44, a 17% decrease.
    • Collaboration app network outages: Both global and U.S. collaboration application network outages recorded 0 outages, down from 1 each the previous week.

    Two notable outages

    On August 19, Arelion (formerly known as Telia Carrier), a global Tier 1 provider headquartered in Stockholm, Sweden, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., France, the U.K., Germany, Ireland, Spain, Norway, Saudi Arabia, Poland, Switzerland, Canada, Finland, Mexico, Singapore, India, and The Netherlands. The disruption, which lasted 59 minutes, was first observed around 12:25 AM EDT and appeared to initially center on nodes located in Newark, NJ, and Ashburn, VA. Five minutes after being first observed, the nodes located in Newark, NJ, appeared to clear and were replaced by nodes located in Chicago, IL, in exhibiting outage conditions. A further five minutes later, nodes located in Chicago, IL, appeared to clear, leaving only nodes located in Ashburn, VA, exhibiting outage conditions. This drop also appeared to coincide with a decrease in the number of downstream customers, partners, and regions impacted. Around twenty minutes later, the nodes located in Ashburn, VA, were joined by nodes located in Chicago, IL, and Los Angeles, CA, in exhibiting outage conditions. The outage was cleared around 1:25 AM EDT. Click here for an interactive view.

    On August 20, GTT Communications, a Tier 1 provider headquartered in Tysons, VA, experienced an outage that affected some of its partners and customers across multiple regions including the U.S., Mexico and Canada. The disruption, which lasted a total of 18 minutes over a 39-minute period, was first observed around 4:56 AM EDT and appeared to initially center on GTT nodes in Dallas, TX. Sixteen minutes after appearing to clear, nodes located in Dallas, TX, once again began exhibiting outage conditions. A further five minutes later, the nodes located in Dallas, TX, were joined by nodes located in San Jose, CA, in exhibiting outage conditions. This increase in the number of nodes and locations exhibiting outage conditions appeared to coincide with an increase in the number of impacted downstream partners. The outage was cleared around 5:35 AM EDT. Click here for an interactive view.

    Internet Report for Aug. 11-17

    ThousandEyes reported 246 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of August 11-17. That’s a decrease of 19% from the 302 outages the week prior. Specific to the U.S., there were 116 outages, which is up 2% from 114 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages decreased from 173 to 144, a 17% decrease compared to the previous week. In the U.S., ISP outages were steady at 44.
    • Public cloud network outages: Globally, cloud provider network outages decreased from 79 to 61, a 23% drop compared to the previous week. In the U.S., outages declined slightly from 55 to 53, a 4% decrease.
    • Collaboration app network outages: Both global and U.S. collaboration application network outages recorded 1 outage, unchanged from the previous week.

    Two notable outages

    On August 14, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers as well as Cogent customers across various regions, including the U.S., Germany, the U.K., Spain, India, Colombia, the Netherlands, Canada, Luxembourg, South Africa, Belgium, Switzerland, Australia, Denmark, Hong Kong, Mexico, and Turkey. The outage, which lasted 24 minutes, was first observed around 4:20 PM EDT and appeared to initially center on Cogent nodes located in Chicago, IL, New York, NY, Washington, D.C., Cleveland, OH, and Frankfurt, Germany. Around five minutes after first being observed, nodes located in Chicago, IL, and Frankfurt, Germany, appeared to clear and were replaced by nodes located in Amsterdam, the Netherlands, in exhibiting outage conditions. A further five minutes on, nodes located in Amsterdam, the Netherlands, were replaced by nodes located in Boston, MA, in exhibiting outage conditions. The outage was resolved around 4:45 PM EDT. Click here for an interactive view.

    On August 15, GTT Communications, a Tier 1 provider headquartered in Tysons, VA, experienced an outage that impacted some of its partners and customers across multiple regions, including the U.S., Germany, France, Canada, the U.K., South Africa, Brazil, Luxembourg, the United Arab Emirates, India, Ireland, Spain, and Australia. The outage, which lasted 9 minutes, was first observed around 9:40 AM EDT and appeared to initially be centered on GTT nodes located in Chicago, IL, Dallas, TX, Washington, D.C., Atlanta, GA, Toronto, Canada, Frankfurt, Germany, Minneapolis, MN, and New York, NY. Around five minutes after first being observed, all the nodes except those located in Chicago, IL, appeared to clear. The outage was cleared around 9:50 AM EDT. Click here for an interactive view.

    Internet Report for Aug. 4-10

    ThousandEyes reported 302 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of August 4-10. That’s an increase of 61% from the 187 outages the week prior. Specific to the U.S., there were 114 outages, which is up 30% from 88 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 97 to 173 outages, a 78% increase compared to the week prior. In the U.S., ISP outages increased from 38 to 44 outages, a 16% increase.
    • Public cloud network outages: Globally, cloud provider network outages increased from 43 to 79 outages, an 84% increase compared to the week prior. In the U.S., cloud provider network outages increased from 30 to 55 outages, an 83% increase.
    • Collaboration app network outages: Both global and U.S. collaboration application network outages recorded 1 outage last week, up from zero the week prior.

    Two notable outages

    On August 7, Arelion (formerly known as Telia Carrier), a global Tier 1 provider headquartered in Stockholm, Sweden, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S. and Mexico. The disruption, which lasted a total of 27 minutes over a one-hour and 39-minute period, was first observed around 1:31 AM EDT and appeared to initially center on nodes located in Mexico City, Mexico, and Guadalajara, Mexico. Twenty-five minutes after appearing to clear, the nodes located in Mexico City, Mexico, and Guadalajara, Mexico, were replaced by nodes located in Los Angeles, CA, in exhibiting outage conditions. By around 2:20 AM EDT, these outage conditions extended to nodes in Atlanta, GA, and Dallas, TX. This increase in affected nodes and locations appeared to coincide with a rise in the number of impacted downstream customers and partners. Fifteen minutes after appearing to clear, nodes located in Miami, FL, began exhibiting outage conditions. The outage was cleared around 3:10 AM EDT. Click here for an interactive view.

    On August 4, UUNET, which was acquired by Verizon in 2006 and now operates as Verizon Business, experienced an outage that affected customers and partners across multiple regions, including the U.S., Singapore, Japan, Canada, South Korea, the Netherlands, Australia, Poland, and Hong Kong. The outage, which lasted 11 minutes, was first observed around 11:58 AM EDT and initially centered on Verizon Business nodes in Washington, D.C. Three minutes into the outage, the nodes located in Washington, D.C., were joined by nodes located in Newark, NJ, in exhibiting outage conditions. A further five minutes later, the nodes located in Newark, NJ, were replaced by nodes located in Chicago, IL, in exhibiting outage conditions. The outage was cleared around 12:10 PM EDT. Click here for an interactive view.

    Internet Report for July 28-Aug. 3

    ThousandEyes reported 187 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of July 28 through August 3. That’s an increase of 18% from the 158 outages the week prior. Specific to the U.S., there were 88 outages, which is down 4% from 92 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 84 to 97 outages, a 15% increase compared to the week prior. In the U.S., however, ISP outages decreased from 44 to 38 outages, a 14% decrease.
    • Public cloud network outages: Globally, cloud provider network outages increased from 35 to 43 outages, a 23% increase compared to the week prior. In the U.S., cloud provider network outages increased slightly from 28 to 30 outages, a 7% increase.
    • Collaboration app network outages: Both global and U.S. collaboration application network outages remained at zero recorded outages.

    Two notable outages

    On July 31, Crown Castle Fiber, a U.S.-based fiber infrastructure provider operating assets acquired from Lightower Fiber Networks in 2017, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., Sweden, the U.K., the United Arab Emirates, Japan, Canada, Italy, Germany, France, the Netherlands, Spain, South Korea, Mexico, Australia, Brazil, Singapore, and Switzerland. The outage, which lasted a total of one hour and 19 minutes over a two-hour and 9-minute period, was first observed around 9:26 PM EDT and appeared to initially be centered on Lightower Fiber Networks nodes located in Boston, MA. Six minutes after appearing to clear, nodes located in Boston, MA, were replaced by nodes located in New York, NY, Philadelphia, PA, and Stamford, CT, in exhibiting outage conditions. Around 9:50 PM EDT, five minutes after appearing to clear, nodes located in Boston, MA, New York, NY, and Philadelphia, PA, once again began exhibiting outage conditions. A further 30 minutes into the outage, nodes located in Philadelphia, PA, appeared to clear and were replaced by nodes located in Newark, NJ, in exhibiting outage conditions. The outage was cleared around 11:35 PM EDT. Click here for an interactive view.

    On July 29, Microsoft experienced an outage on its network that impacted some downstream partners and access to services running on Microsoft environments across multiple regions, including the U.S., Canada, New Zealand, Brazil, Indonesia, Singapore, Japan, Mexico, Costa Rica, Australia, France, Honduras, the U.K., Hong Kong, South Africa, Taiwan, Argentina, and Panama. The outage, lasting 19 minutes, was first observed around 2:05 AM EDT and appeared to initially center on Microsoft nodes located in Chicago, IL, and Newark, NJ. Around 5 minutes after first being observed, nodes located in Newark, NJ, were replaced by nodes located in Cleveland, OH, in exhibiting outage conditions. A further five minutes later, nodes located in Cleveland, OH, were replaced by nodes located in Des Moines, IA, in exhibiting outage conditions. The outage was cleared around 2:25 AM EDT. Click here for an interactive view.

    Internet Report for July 21-27

    ThousandEyes reported 158 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of July 21-27. That’s a decrease of 27% from the 216 outages the week prior. Specific to the U.S., there were 92 outages, which is down 28% from 128 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages decreased from 127 to 84 outages, a 34% decrease compared to the week prior. Similarly in the U.S., ISP outages decreased from 68 to 44 outages, a 35% decrease.
    • Public cloud network outages: Globally, cloud provider network outages decreased from 37 to 35 outages, a 5% decrease compared to the week prior. In the U.S., cloud provider network outages decreased slightly from 29 to 28 outages, a 3% decrease.
    • Collaboration app network outages: Both global and U.S. collaboration application network outages remained at zero recorded outages.

    Two notable outages

    On July 25, Zayo Group, a U.S.-based Tier 1 carrier headquartered in Boulder, Colorado, experienced an outage that impacted some of its partners and customers across multiple regions, including the U.S., Canada, India, the U.K., South Africa, the Philippines, Bulgaria, the Netherlands, Hong Kong, Germany, Chile, Mexico, France, Japan, Colombia, Poland, Australia, Malaysia, Honduras, South Korea, Switzerland, Uruguay, Brazil, and Singapore. The outage, which lasted a total of 30 minutes over a 50-minute period, was first observed around 12:25 AM EDT and appeared to initially be centered on Zayo nodes located in Washington, D.C., New York, NY, Dallas, TX, Toronto, Canada, Los Angeles, CA, and Ashburn, VA. Around five minutes into the outage, the number of locations exhibiting outage conditions expanded to include nodes located in Chicago, IL, and Atlanta, GA. This increase in the number of nodes and locations exhibiting outage conditions appeared to coincide with an increase in the number of impacted downstream partners. Around 15 minutes after appearing to clear, nodes located in Chicago, IL, once again appeared to exhibit outage conditions. The outage was cleared around 1:15 AM EDT. Click here for an interactive view.

    On July 25, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers as well as Cogent customers across various regions, including the U.S., Canada and the U.K. The outage, which lasted 19 minutes, was first observed around 12:35 AM EDT and appeared to center on Cogent nodes located in Washington, D.C. Around fifteen minutes after first being observed, the number of nodes located in Washington, D.C., exhibiting outage conditions appeared to increase. This rise in the number of nodes exhibiting outage conditions appeared to coincide with an increase in the number of impacted downstream partners and customers. The outage was resolved around 12:55 AM EDT. Click here for an interactive view.

    Internet Report for July 14-20

    ThousandEyes reported 216 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of July 14-20. That’s an increase of 44% from the 150 outages the week prior. Specific to the U.S., there were 128 outages, which is up 94% from 66 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 80 to 127 outages, a 59% increase compared to the week prior. In the U.S., ISP outages more than doubled, rising from 33 to 68 outages, a 106% increase.
    • Public cloud network outages: Globally, cloud provider network outages increased from 28 to 37 outages, a 32% increase compared to the week prior. In the U.S., cloud provider network outages increased from 16 to 29 outages, an 81% increase.
    • Collaboration app network outages: Globally, collaboration application network outages dropped from two outages to zero. In the U.S., collaboration application network outages remained at zero.

    Two notable outages

    On July 14, Cloudflare experienced a DNS service outage, impacting users globally who relied on its public DNS resolver. The outage, first observed around 5:50 PM EDT, prevented affected users from resolving domain names and accessing websites and applications. Cloudflare confirmed that the incident resulted from an internal configuration error that caused their DNS public resolver service to become unreachable. The incident lasted approximately one hour, with service fully restored by 6:54 PM EDT after Cloudflare identified and fixed the configuration issue. Click here for an interactive view and here for a detailed analysis.
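
    From the client side, a resolver outage like this one is straightforward to detect: queries sent directly to the affected public resolver fail while an independent resolver still answers. The sketch below illustrates that kind of probe using the third-party dnspython package; it is an illustration of the idea, not how ThousandEyes measures resolver availability:

    ```python
    import dns.exception
    import dns.resolver  # third-party: pip install dnspython

    def resolver_healthy(nameserver: str, probe: str = "example.com") -> bool:
        """Return True if the resolver answers an A-record query within 3 seconds."""
        r = dns.resolver.Resolver(configure=False)  # ignore the local resolver config
        r.nameservers = [nameserver]
        r.lifetime = 3  # total seconds to wait before declaring the query failed
        try:
            r.resolve(probe, "A")
            return True
        except dns.exception.DNSException:
            return False

    # Compare the affected resolver (Cloudflare's 1.1.1.1) against an independent one
    for ns in ("1.1.1.1", "8.8.8.8"):
        print(ns, "OK" if resolver_healthy(ns) else "FAILING")
    ```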

    On July 17, GTT Communications, a Tier 1 provider headquartered in Tysons, VA, experienced an outage that impacted some of its partners and customers across multiple regions, including the U.S., Canada, Germany, and Australia. The outage, which lasted 58 minutes, was first observed around 5:30 AM EDT and appeared to be centered on GTT nodes located in San Jose, CA. The outage was cleared around 6:30 AM EDT. Click here for an interactive view.

    Internet Report for July 7-13

    ThousandEyes reported 150 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of July 7-13. That’s the same volume of outages as the week prior. Specific to the U.S., there were 66 outages, which is down 15% from 78 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 66 to 80 outages, a 21% increase compared to the week prior. In the U.S., ISP outages increased slightly from 30 to 33 outages, a 10% increase.
    • Public cloud network outages: Globally, cloud provider network outages decreased from 45 to 28 outages, a 38% decrease compared to the week prior. In the U.S., cloud provider network outages dropped from 33 to 16 outages, a 52% decrease.
    • Collaboration app network outages: For the first time in eight weeks, two collaboration application network outages were recorded globally. In the U.S., collaboration application network outages remained at zero.

    Two notable outages

    On July 10, Hurricane Electric, a network transit provider based in Fremont, CA, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., Mexico, Canada, Japan, Hong Kong, India, the Philippines, Singapore, Malaysia, Indonesia, Belgium, Bulgaria, Vietnam, and Thailand. The outage was first observed around 12:30 AM EDT and lasted a total of 30 minutes over a period of 50 minutes. The outage initially appeared to be centered on Hurricane Electric nodes located in Singapore, Hong Kong, and Frankfurt, Germany. Around twenty minutes into the outage, those nodes were joined by nodes located in Chicago, IL, in exhibiting outage conditions. Ten minutes after appearing to clear, nodes located in Singapore and Frankfurt, Germany, once again began exhibiting outage conditions. The outage was cleared at around 1:30 AM EDT. Click here for an interactive view.

    On July 8, Unitas Global, a U.S.-based network transit provider that merged with PacketFabric in 2023 and now operates as PacketFabric, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., Turkey, Canada, Singapore, the Netherlands, India, Switzerland, Germany, Malaysia, Greece, and France. The outage was first observed around 8:11 PM EDT and lasted a total of 34 minutes over a period of 54 minutes. The outage initially appeared to be centered on Unitas Global nodes located in Washington, D.C. Around six minutes into the outage, the nodes located in Washington, D.C., appeared to clear and were replaced by nodes located in New York, NY, in exhibiting outage conditions. Five minutes after appearing to clear, nodes located in New York, NY, and Washington, D.C., once again began exhibiting outage conditions. This rise in the number of nodes and locations exhibiting outage conditions appeared to coincide with an increase in the number of impacted downstream partners and customers. Around five minutes further on, the nodes located in New York, NY, and Washington, D.C., were temporarily replaced by nodes located in London, England, in exhibiting outage conditions. Five minutes later, the nodes located in London, England, were replaced by nodes located in New York, NY, in exhibiting outage conditions. The outage was cleared at around 9:05 PM EDT. Click here for an interactive view.

    Internet Report for June 30-July 6

    ThousandEyes reported 150 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of June 30-July 6. That’s a decrease of 28% from 208 outages the week prior. Specific to the U.S., there were 78 outages, which is down 39% from 128 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages dropped from 120 to 66 outages, a 45% decrease compared to the week prior. In the U.S., ISP outages decreased from 67 to 30 outages, a 55% decrease.
    • Public cloud network outages: Globally, cloud provider network outages decreased from 49 to 45 outages, an 8% decrease compared to the week prior. In the U.S., cloud provider network outages decreased slightly from 35 to 33 outages, a 6% decrease.
    • Collaboration app network outages: Both globally and in the U.S., collaboration application network outages remained at zero outages for the seventh week in a row.

    Two notable outages

    On June 30, Lumen, a U.S.-based Tier 1 carrier (previously known as CenturyLink), experienced an outage that affected customers and downstream partners across multiple regions including the U.S., Mexico, and Canada. The outage, lasting a total of one hour and 18 minutes over a period of one hour and 25 minutes, was first observed around 6:20 AM EDT and appeared to initially be centered on Lumen nodes located in Houston, TX, and Dallas, TX. Around fifteen minutes into the outage, nodes located in Houston, TX, and Dallas, TX, were joined by nodes located in Atlanta, GA, in exhibiting outage conditions. Five minutes after appearing to clear, nodes located in Dallas, TX, once again appeared to exhibit outage conditions. The outage was cleared around 7:45 AM EDT. Click here for an interactive view.

    On June 30, Amazon experienced a disruption that impacted some of its partners and customers across multiple regions, including the U.S., Mexico, the Philippines, Vietnam, the Netherlands, France, and Brazil. The outage lasted a total of 47 minutes within a three-hour and 43-minute period and was first observed around 12:42 PM EDT. It appeared to center on Amazon nodes located in Ashburn, VA. The outage was cleared around 4:25 PM EDT. Click here for an interactive view.

    Internet Report for June 23-29

    ThousandEyes reported 208 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of June 23-29. That’s a decrease of 17% from 252 outages the week prior. Specific to the U.S., there were 128 outages, which is up 20% from 107 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages decreased from 141 to 120 outages, a 15% decrease compared to the week prior. In the U.S., however, ISP outages increased from 48 to 67 outages, a 40% increase.
    • Public cloud network outages: Globally, cloud provider network outages increased from 44 to 49 outages, an 11% increase compared to the week prior. In the U.S., cloud provider network outages increased from 32 to 35 outages, a 9% increase.
    • Collaboration app network outages: Both globally and in the U.S., collaboration application network outages remained at zero outages for the sixth week in a row.

    Two notable outages

    On June 24, Zayo Group, a U.S.-based Tier 1 carrier headquartered in Boulder, Colorado, experienced an outage that impacted some of its partners and customers across multiple regions, including the U.S. and Canada. The outage lasted a total of 42 minutes within a one-hour period and was first observed around 3:10 AM EDT. It appeared to center on Zayo Group nodes located in Seattle, WA. Five minutes after appearing to clear, nodes located in Seattle, WA, once again appeared to exhibit outage conditions. The outage was cleared around 4:10 AM EDT. Click here for an interactive view.

    On June 25, UUNET, which was acquired by Verizon in 2006 and now operates as Verizon Business, experienced an outage that affected customers and partners across multiple regions, including the U.S., India, Japan, the U.K., Singapore, Germany, the Philippines, Australia, and Hong Kong. The outage lasted a total of 20 minutes over a 30-minute period. The outage was first observed around 1:10 PM EDT and initially centered on Verizon Business nodes in Los Angeles, CA. Around 3 minutes into the outage, the nodes located in Los Angeles, CA, were joined by nodes located in Phoenix, AZ, and Gilbert, AZ, in exhibiting outage conditions. This increase in affected nodes and locations appeared to coincide with a rise in the number of impacted regions, downstream customers, and partners. Around six minutes after appearing to clear, nodes located in Los Angeles, CA, once again began exhibiting outage conditions. The outage was cleared around 1:40 PM EDT. Click here for an interactive view.

    Internet Report for June 16-22

    ThousandEyes reported 252 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of June 16-22. That’s a decrease of 33% from 376 outages the week prior. Specific to the U.S., there were 107 outages, which is down 21% from 135 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages dropped from 285 to 141 outages, a 51% decrease compared to the week prior. In the U.S., ISP outages decreased from 91 to 48 outages, a 47% decrease.
    • Public cloud network outages: Globally, cloud provider network outages increased from 27 to 44 outages, a 63% increase compared to the week prior. In the U.S., cloud provider network outages increased from 17 to 32 outages, an 88% increase.
    • Collaboration app network outages: Both globally and in the U.S., collaboration application network outages remained at zero outages for the fifth week in a row.

    Two notable outages

    On June 17, Hurricane Electric, a network transit provider based in Fremont, CA, experienced an outage that impacted customers and downstream partners across the U.S. The outage was first observed around 3:55 PM EDT and lasted a total of 18 minutes over a 50-minute period. The outage initially appeared to be centered on Hurricane Electric nodes located in Dallas, TX. Around 11 minutes after first being observed, the number of nodes located in Dallas, TX, appeared to rise. This increase in the number of nodes exhibiting outage conditions appeared to coincide with an increase in the number of impacted downstream partners. The outage was cleared at around 4:45 PM EDT. Click here for an interactive view.

    On June 20, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers as well as Cogent customers across various regions, including the U.S., Ireland, the U.K., India, the Netherlands, Spain, Portugal, Luxembourg, Belgium, Austria, Italy, Canada, Switzerland, South Africa, Brazil, Australia, Qatar, Romania, Singapore, the United Arab Emirates, Sweden, Poland, France, and Germany. The outage, which lasted one hour and 12 minutes, was first observed around 8:20 AM EDT and initially appeared to center on Cogent nodes located in London, England, and Paris, France. Around ten minutes after first being observed, the nodes located in London, England, and Paris, France, were joined by nodes located in York, England, in exhibiting outage conditions. Thirty-five minutes further into the outage, nodes located in Slough, England, joined the other nodes in exhibiting outage conditions. After a further ten minutes, all nodes except those located in London, England, and York, England, appeared to clear. The outage was resolved around 9:35 AM EDT. Click here for an interactive view.

    Internet Report for June 9-15

    ThousandEyes reported 376 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of June 9-15. That’s an increase of 24% from 304 outages the week prior. Specific to the U.S., there were 135 outages, which is up 75% from 77 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 233 to 285 outages, a 22% increase compared to the week prior. In the U.S., ISP outages increased from 32 to 91 outages, a 184% jump.
    • Public cloud network outages: Globally, cloud provider network outages remained the same as the week prior, recording 27 outages. In the U.S., cloud provider network outages decreased from 18 to 17 outages, a 6% decrease.
    • Collaboration app network outages: Both globally and in the U.S., collaboration application network outages remained at zero outages for the fourth week in a row.

    Two notable outages

    On June 13, AT&T experienced an outage on its network that impacted AT&T customers and partners across the U.S. The outage, lasting around 14 minutes, was first observed around 2:00 AM EDT, appearing to center on AT&T nodes located in Dallas, TX. The outage was cleared at around 2:15 AM EDT. Click here for an interactive view.

    On June 12, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers and customers across the U.S. The outage lasted a total of 40 minutes, distributed across a series of occurrences over a period of around 2 hours. The first occurrence was observed around 8:01 PM EDT and initially seemed to be centered on Cogent nodes located in San Francisco, CA. Around 10 minutes after first being observed, nodes in San Francisco, CA, were joined by nodes located in San Jose, CA, in exhibiting outage conditions. Five minutes after appearing to clear, nodes located in San Jose, CA, once again began exhibiting outage conditions. A further thirty minutes later, nodes located in San Jose, CA, were once again joined by nodes located in San Francisco, CA, in exhibiting outage conditions. The outage was cleared around 10:00 PM EDT. Click here for an interactive view.

    Internet Report for June 2-8

    ThousandEyes reported 304 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of June 2-8. That’s an increase of 26% from the 241 outages the week prior. Specific to the U.S., there were 77 outages, which is down 8% from 84 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 165 to 233 outages, a 41% increase compared to the week prior. In the U.S., ISP outages decreased slightly from 35 to 32 outages, a 9% decrease.
    • Public cloud network outages: Globally, cloud provider network outages decreased from 30 to 27 outages, a 10% decrease compared to the week prior. In the U.S., cloud provider network outages remained the same as the week prior, recording 18 outages.
    • Collaboration app network outages: Both globally and in the U.S., collaboration application network outages remained at zero outages for the third week in a row. 

    Two notable outages

    On June 4, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers as well as Cogent customers across various regions, including the U.S., Mexico, the U.K., the Netherlands, Spain, Germany, Hong Kong, Brazil, and Canada. The outage, which lasted 11 minutes, was first observed around 12:16 AM EDT and initially appeared to center on Cogent nodes located in Mexico City, Mexico, and Los Angeles, CA. Around five minutes after first being observed, the nodes located in Mexico City, Mexico, appeared to clear, while nodes located in Los Angeles, CA, were joined by nodes located in Washington, D.C., and Dallas, TX, in exhibiting outage conditions. This rise in the number of nodes and locations exhibiting outage conditions appeared to coincide with an increase in the number of impacted downstream partners and customers. A further five minutes into the outage, the nodes located in Los Angeles, CA, Washington, D.C., and Dallas, TX, appeared to clear and were replaced by nodes located in McAllen, TX, and Monterrey, Mexico, in exhibiting outage conditions. The outage was resolved around 12:30 AM EDT. Click here for an interactive view.

    On June 3, Microsoft experienced an outage on its network that impacted some downstream partners and access to services running on Microsoft environments within the U.S. The outage, lasting a total of 5 minutes over a 21-minute period, was first observed around 12:04 PM EDT and appeared to initially center on Microsoft nodes located in New York, NY. Around 2 minutes after first being observed, nodes located in New York, NY, were replaced by nodes located in Newark, NJ, in exhibiting outage conditions. Twelve minutes after appearing to clear, nodes located in Chicago, IL, began exhibiting outage conditions. The outage was cleared around 12:25 PM EDT. Click here for an interactive view.

    Internet Report for May 26-June 1

    ThousandEyes reported 241 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of May 26-June 1. That’s a decrease of 37% from 383 outages the week prior. Specific to the U.S., there were 84 outages, which is down 43% from 147 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages dropped from 293 to 165 outages, a 44% decrease compared to the week prior. In the U.S., ISP outages decreased from 96 to 35 outages, a 64% drop.
    • Public cloud network outages: Globally, cloud provider network outages slightly decreased from 31 to 30 outages, a 3% decrease compared to the week prior. In the U.S., cloud provider network outages decreased from 21 to 18 outages, a 14% decrease.
    • Collaboration app network outages: Both globally and in the U.S., collaboration application network outages remained at zero outages for the second week in a row. 

    Two notable outages

    On May 29, Arelion (formerly known as Telia Carrier), a global Tier 1 provider headquartered in Stockholm, Sweden, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., India, Singapore, Germany, Spain, the U.K., Malaysia, Mexico, South Africa, the Czech Republic, Peru, Chile, Brazil, Indonesia, Italy, Canada, the Philippines, Sweden, Portugal, Hungary, Australia, Argentina, Denmark, Hong Kong, Thailand, Finland, Norway, and Japan. The disruption, which lasted 31 minutes, was first observed around 6:32 PM EDT and appeared to initially center on nodes located in Singapore. Around 30 minutes after first being observed, the nodes located in Singapore were joined by nodes located in Los Angeles, CA, and Minnesota, in exhibiting outage conditions. This rise in nodes and locations exhibiting outage conditions also appeared to coincide with an increase in the number of downstream customers, partners, and regions impacted. The outage was cleared around 7:05 PM EDT. Click here for an interactive view.

    On May 28, Cloudflare, a U.S.-headquartered web infrastructure and website security company that provides content delivery network services, suffered an interruption that impacted its customers across the U.S. The outage, lasting 12 minutes over a period of 28 minutes, was first observed at around 8:22 PM EDT and appeared to center on Cloudflare nodes located in Chicago, IL. Ten minutes after appearing to clear, the nodes located in Chicago, IL, once again began exhibiting outage conditions, and were briefly joined by nodes located in Nebraska. The outage was cleared around 8:50 PM EDT. Click here for an interactive view.

    Internet Report for May 19-25

    ThousandEyes reported 383 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of May 19-25. That’s a decrease of 29% from 536 outages the week prior. Specific to the U.S., there were 147 outages, which is down 1% from 149 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages decreased from 432 to 293 outages, a 32% decrease compared to the week prior. In the U.S., ISP outages increased from 84 to 96 outages, a 14% increase.
    • Public cloud network outages: Globally, cloud provider network outages decreased from 38 to 31 outages, an 18% decrease compared to the week prior. In the U.S., cloud provider network outages decreased from 24 to 21 outages, a 13% decrease.
    • Collaboration app network outages: Both globally and in the U.S., collaboration application network outages dropped to zero, down from one outage each the week prior.

    Two notable outages

    On May 22, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers as well as Cogent customers across various regions, including the U.S., Canada, Australia, Japan, Hong Kong, Germany, Brazil, South Korea, the Netherlands, Luxembourg, Ireland, the U.K., Poland, and Switzerland. The outage, which lasted a total of 2 hours and 34 minutes over a three-hour and 10-minute period, was first observed around 11:30 PM EDT and initially appeared to center on Cogent nodes located in Denver, CO, and San Jose, CA. Around ten minutes after first being observed, nodes located in Sacramento, CA, San Francisco, CA, and Salt Lake City, UT, also began to exhibit outage conditions. A further five minutes into the outage, the nodes located in Sacramento, CA, and Salt Lake City, UT, appeared to clear and were replaced by nodes located in Frankfurt, Germany, Singapore, and Tokyo, Japan, in exhibiting outage conditions. Around twenty minutes later, nodes exhibiting outage conditions had expanded to include nodes located in San Jose, CA, Oakland, CA, Salt Lake City, UT, New York, NY, Newark, NJ, San Francisco, CA, and Washington, D.C. This rise in the number of nodes and locations exhibiting outage conditions appeared to coincide with an increase in the number of impacted downstream partners and customers. Around 30 minutes after appearing to clear, nodes located in Denver, CO, once again appeared to exhibit outage conditions. Around twenty minutes later, the nodes located in Denver, CO, were joined first by nodes located in Chicago, IL, and then Dallas, TX, in exhibiting outage conditions. The outage was resolved around 2:40 AM EDT. Click here for an interactive view.

    On May 22, Amazon experienced a disruption that impacted some of its partners and customers across multiple regions, including the U.S., Mexico, Costa Rica, Colombia, Brazil, and the Netherlands. The outage, lasting 13 minutes, was first observed around 12:00 AM EDT, and appeared to be centered on Amazon nodes located in Ashburn, VA. Around five minutes after first being observed, the number of nodes exhibiting outage conditions in Ashburn, VA, increased. This increase in affected nodes appeared to coincide with an increase in the number of impacted downstream customers and partners. The outage was cleared around 12:15 AM EDT. Click here for an interactive view.

    Internet report for May 12-18

    ThousandEyes reported 536 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of May 12-18. That’s an increase of 8% from 495 outages the week prior. Specific to the U.S., there were 149 outages, which is up 54% from 97 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 410 to 432 outages, a 5% increase compared to the week prior. In the U.S., ISP outages increased from 52 to 84 outages, a 62% increase.
    • Public cloud network outages: Globally, cloud provider network outages increased from 32 to 38 outages, a 19% increase compared to the week prior. In the U.S., cloud provider network outages increased from 20 to 24 outages, a 20% increase.
    • Collaboration app network outages: Globally, collaboration application network outages dropped from two outages to one outage. In the U.S., there was one collaboration application network outage, up from zero the week prior. 

    Two notable outages

    On May 16, AT&T experienced an outage on its network that impacted AT&T customers and partners across the U.S. The outage, lasting around 43 minutes, was first observed around 6:55 AM EDT, appearing to center on AT&T nodes located in Dallas, TX. Fifteen minutes after first being observed, the number of nodes exhibiting outage conditions located in Dallas, TX, appeared to drop. This decrease appeared to coincide with a drop in the number of impacted partners and customers. The outage was cleared at around 7:40 AM EDT. Click here for an interactive view.

    On May 13, GTT Communications, a Tier 1 provider headquartered in Tysons, VA, experienced an outage that impacted some of its partners and customers across multiple regions, including the U.S., India, the U.K., Germany, Singapore, Japan, and Spain. The outage, which lasted 14 minutes, was first observed around 3:30 AM EDT and appeared to be centered on GTT nodes located in Los Angeles, CA. Around five minutes into the outage, the number of nodes located in Los Angeles, CA, appeared to rise. This increase appeared to coincide with an increase in the number of impacted downstream partners. The outage was cleared around 3:45 AM EDT. Click here for an interactive view.

    Internet report for May 5-11

    ThousandEyes reported 495 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of May 5-11. That’s an increase of 11% from 444 outages the week prior. Specific to the U.S., there were 97 outages, which is up 2% from 95 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 361 to 410 outages, a 14% increase compared to the week prior. In the U.S., ISP outages slightly decreased from 54 to 52 outages, a 4% decrease.
    • Public cloud network outages: Globally, cloud provider network outages increased from 21 to 32 outages, a 52% increase compared to the week prior. In the U.S., cloud provider network outages doubled from 10 to 20 outages.
    • Collaboration app network outages: Globally, collaboration application network outages dropped from four to two outages. In the U.S., collaboration application network outages dropped to zero.

    Two notable outages

    On May 5, GTT Communications, a Tier 1 provider headquartered in Tysons, VA, experienced an outage that affected some of its partners and customers across multiple regions including the U.S., the Netherlands, Germany, and Japan. The disruption, which lasted a total of 13 minutes over a 20-minute period, was first observed around 2:45 PM EDT and appeared to center on GTT nodes in Los Angeles, CA. Five minutes after appearing to clear, nodes located in Los Angeles, CA, once again began exhibiting outage conditions. The outage was cleared around 3:05 PM EDT. Click here for an interactive view.

    On May 6, Microsoft experienced an outage on its network that impacted some downstream partners and access to services running on Microsoft environments in multiple regions, including the U.S., Canada, Colombia, Peru, the U.K., Brazil, Vietnam, India, and Mexico. The outage, which lasted one hour and 38 minutes, was first observed around 9:00 AM EDT and appeared to initially center on Microsoft nodes located in Dallas, TX. Around 20 minutes after first being observed, nodes located in Dallas, TX, were joined by nodes located in San Antonio, TX, Arlington, TX, and Houston, TX, in exhibiting outage conditions. Around 25 minutes further into the outage, the nodes located in Houston, TX, appeared to clear and were replaced by nodes located in Des Moines, IA, in exhibiting outage conditions. A further twenty-five minutes in, the nodes located in Arlington, TX, appeared to clear, replaced by nodes located in Atlanta, GA, and Newark, NJ, in exhibiting outage conditions. The outage was cleared around 10:40 AM EDT. Click here for an interactive view.

    Internet report for April 28-May 4

    ThousandEyes reported 444 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of April 28-May 4. That’s an increase of 28% from 348 outages the week prior. Specific to the U.S., there were 95 outages, which is up 38% from 69 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 273 to 361 outages, a 32% increase compared to the week prior. In the U.S., ISP outages increased from 32 to 54 outages, a 69% increase.
    • Public cloud network outages: Globally, cloud provider network outages increased from 13 to 21 outages, a 62% increase compared to the week prior. In the U.S., cloud provider network outages jumped from one to 10.
    • Collaboration app network outages: Both globally and in the U.S., collaboration application network outages remained the same as the week prior, recording 4 outages.

    Two notable outages

    On April 29, NTT America, a global Tier 1 provider and subsidiary of NTT Global, experienced a series of outages over a one-hour and 15-minute period. These outages impacted multiple downstream providers and customers across various regions, including the U.S., Philippines, Japan, Hong Kong, Argentina, Thailand, Australia, Singapore, South Korea, Germany, Canada, Mexico, and Brazil. The outage, which lasted a total of 50 minutes, was first observed around 10:40 AM EDT and appeared to be centered on NTT nodes located in Ashburn, VA. Approximately ten minutes after first being observed, the number of nodes exhibiting outage conditions in Ashburn, VA, increased. This increase in affected nodes appeared to coincide with an increase in the number of impacted downstream customers and partners. The outage was cleared around 11:55 AM EDT. Click here for an interactive view.

    On May 1, UUNET Verizon, acquired by Verizon in 2006 and now operating as Verizon Business, experienced an outage that affected customers and partners across multiple regions, including the U.S., the U.K., Germany, the Netherlands, Puerto Rico, Switzerland, Mexico, Indonesia, Japan, France, Ireland, Colombia, India, South Korea, the Philippines, Finland, and Sweden. The outage lasted a total of 26 minutes over a 50-minute period. The outage was first observed around 12:55 AM EDT and initially centered on Verizon Business nodes in Boston, MA. Around 19 minutes after appearing to clear, Verizon nodes located in Washington, D.C., began exhibiting outage conditions. Five minutes further into the outage, the nodes located in Washington, D.C., were joined by nodes located in New York, NY, Cleveland, OH, Dallas, TX, Atlanta, GA, Brooklyn, NY, Newark, NJ, and Arlington, VA, in exhibiting outage conditions. This increase in affected nodes and locations appeared to coincide with a rise in the number of impacted downstream customers and partners. The outage was cleared around 1:55 AM EDT. Click here for an interactive view.

    Internet report for April 21-27

    ThousandEyes reported 348 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of April 21-27. That’s an increase of 13% from 309 outages the week prior. Specific to the U.S., there were 69 outages, which is equal to the number of U.S. outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 238 to 273 outages, a 15% increase compared to the week prior. In the U.S., however, ISP outages decreased from 37 to 32 outages, a 14% decrease.
    • Public cloud network outages: Globally, cloud provider network outages dropped from 17 to 13 outages, a 24% drop compared to the week prior. In the U.S., cloud provider network outages dropped down from four to one.
    • Collaboration app network outages: Globally, collaboration application network outages increased from three to four outages. In the U.S., collaboration application network outages increased from two to four outages.

    Two notable outages

    On April 23, GTT Communications, a Tier 1 provider headquartered in Tysons, VA, experienced an outage that impacted some of its partners and customers across multiple regions, including the U.S., Germany, and the Netherlands. The outage, which lasted 8 minutes, was first observed around 4:45 AM EDT and appeared to initially be centered on GTT nodes located in San Francisco, CA. Around five minutes into the outage, those nodes appeared to clear and were replaced by nodes located in New York, NY, in exhibiting outage conditions. This change appeared to coincide with an increase in the number of impacted downstream partners. The outage was cleared around 4:55 AM EDT. Click here for an interactive view.

    On April 25, Zayo Group, a U.S.-based Tier 1 carrier headquartered in Boulder, Colorado, experienced an outage that impacted some of its partners and customers across multiple regions, including the U.S. and Israel. The outage, which lasted 6 minutes, was first observed around 1:10 AM EDT and appeared to be centered on Zayo nodes located in Dallas, TX. Around five minutes into the outage, the number of nodes located in Dallas, TX, exhibiting outage conditions increased. This increase appeared to coincide with an increase in the number of impacted downstream partners. The outage was cleared around 1:20 AM EDT. Click here for an interactive view.

    Internet report for April 14-20

    ThousandEyes reported 309 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of April 14-20. That’s a decrease of 45% from 559 outages the week prior. Specific to the U.S., there were 69 outages, which is down 67% from 212 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages decreased from 378 to 238 outages, a 37% decrease compared to the week prior. In the U.S., ISP outages decreased from 106 to 37 outages, a 65% decrease.
    • Public cloud network outages: Globally, cloud provider network outages decreased from 99 to 17 outages, an 83% drop compared to the week prior. In the U.S., cloud provider network outages dropped from 59 to 4 outages, a 93% drop.
    • Collaboration app network outages: Globally, collaboration application network outages decreased from four to three outages. In the U.S., collaboration application network outages fell from four to two outages.

    Two notable outages

    On April 16, Zoom experienced a global outage that affected users worldwide. The issue was first observed around 2:25 PM EDT and lasted approximately two hours, with the problem resolved by 4:12 PM EDT. However, some disruptions continued to be observed until 4:30 PM EDT. The outage was caused by a problem at the DNS layer, which impacted connectivity to the zoom.us domain and disrupted all associated services. Click here for an interactive view, and here for a detailed analysis.
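
    Because the failure sat at the DNS layer rather than inside Zoom's own network, a name-resolution check is often the fastest way to confirm this class of incident before digging into network paths or servers. Below is a minimal, illustrative Python sketch of such a first-line check using only the standard library; the port and the domain list are our assumptions, with zoom.us included simply because it is the domain from this report.

        import socket

        def resolves(hostname: str) -> bool:
            """Return True if the system resolver can map hostname to an address."""
            try:
                socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
                return True
            except socket.gaierror:
                return False

        # During the April 16 incident, the zoom.us check would have failed
        # while unrelated domains continued to resolve normally.
        for domain in ("zoom.us", "example.com"):
            print(f"{domain}: {'OK' if resolves(domain) else 'DNS FAILURE'}")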

    On April 17, GTT Communications, a Tier 1 provider headquartered in Tysons, VA, experienced an outage that affected some of its partners and customers across the U.S. The disruption, which lasted a total of 16 minutes over a 34-minute period, was first observed around 12:16 AM EDT and appeared to center on GTT nodes in Miami, FL. Ten minutes after appearing to clear, nodes located in Miami, FL, once again began exhibiting outage conditions. The outage was cleared around 12:50 AM EDT. Click here for an interactive view.

    Internet report for April 7-13

    ThousandEyes reported 559 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of April 7-13. That’s an increase of 38% from 404 outages the week prior. Specific to the U.S., there were 212 outages, which is up 41% from 150 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 281 to 378 outages, a 35% increase compared to the week prior. In the U.S., ISP outages increased from 63 to 106 outages, a 68% increase.
    • Public cloud network outages: Globally, cloud provider network outages increased from 71 to 99 outages, a 39% increase compared to the week prior. In the U.S., cloud provider network outages increased from 55 to 59 outages, a 7% increase.
    • Collaboration app network outages: Both globally and in the U.S., collaboration application network outages went from zero to four outages.

    Two notable outages

    On April 8, Arelion (formerly known as Telia Carrier), a global Tier 1 provider headquartered in Stockholm, Sweden, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., Luxembourg, Germany, Argentina, Sweden, Canada, and Singapore. The disruption, which lasted a total of 57 minutes over a one-hour and 21-minute period, was first observed around 12:14 AM EDT and appeared to initially center on nodes located in Boston, MA. Fifteen minutes after appearing to clear, nodes located in Los Angeles, CA, began exhibiting outage conditions. By around 1:10 AM EDT, these outage conditions extended to nodes in Newark, NJ, and Ashburn, VA. This increase appeared to coincide with a rise in the number of impacted downstream customers and partners. Ten minutes later, the nodes in Los Angeles, CA, were replaced by nodes located in Boston, MA, and New York, NY, in exhibiting outage conditions. The outage was cleared around 1:35 AM EDT. Click here for an interactive view.

    On April 13, GTT Communications, a Tier 1 provider headquartered in Tysons, VA, experienced an outage that impacted some of its partners and customers across multiple regions, including the U.S., Germany, the Netherlands, Canada, and Japan. The outage, which lasted 16 minutes, was first observed around 11:35 AM EDT and appeared to initially be centered on GTT nodes located in Chicago, IL, and New York, NY.  Around ten minutes into the outage, the number of nodes located in Chicago, IL, exhibiting outage conditions increased. While nodes in all other locations had appeared to clear by this time, this increase in the number of nodes exhibiting outage conditions located in Chicago, IL, appeared to coincide with an increase in the number of impacted downstream partners. The outage was cleared around 11:55 AM EDT. Click here for an interactive view.

    Internet report for March 31-April 6

    ThousandEyes reported 404 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of March 31-April 6. That’s a decrease of 23% from 525 outages the week prior. Specific to the U.S., there were 150 outages, which is down 29% from 212 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages decreased from 293 to 281 outages, a 4% decrease compared to the week prior. In the U.S., ISP outages increased slightly from 62 to 63, a 2% gain.
    • Public cloud network outages: Globally, cloud provider network outages decreased significantly, dropping 57% from 165 to 71 outages. In the U.S., cloud provider network outages decreased from 109 to 55 outages, a 50% decrease.
    • Collaboration app network outages: Both globally and in the U.S., collaboration application network outages dropped to zero, down from one outage each the week prior.

    Two notable outages

    On April 2, Amazon experienced a disruption that impacted some of its partners and customers across multiple regions, including the U.S., Mexico, the U.K., Germany, Colombia, Brazil, and Egypt. The outage lasted a total of 19 minutes within a 50-minute period and was first observed around 8:30 PM EDT. It appeared to center on Amazon nodes located in Ashburn, VA. The outage was cleared around 9:20 PM EDT. Click here for an interactive view.

    On April 3, AT&T experienced an outage on its network that impacted AT&T customers and partners across the U.S. The outage lasted approximately 19 minutes and was first observed around 1:35 AM EDT, appearing to center on AT&T nodes located in Cambridge, MA. Ten minutes after first being observed, the number of nodes exhibiting outage conditions located in Cambridge, MA, appeared to rise. This increase in nodes exhibiting outage conditions appeared to coincide with a rise in the number of impacted partners and customers. The outage was cleared at around 1:55 AM EDT. Click here for an interactive view.

    Internet report for March 24-30

    ThousandEyes reported 525 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of March 24-30. That’s a decrease of 21% from 664 outages the week prior. Specific to the U.S., there were 212 outages, which is down 26% from 287 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages decreased from 316 to 293 outages, a 7% decrease compared to the week prior. In the U.S., ISP outages decreased slightly from 63 to 62, a 2% decrease.
    • Public cloud network outages: Globally, cloud provider network outages dropped from 258 to 165 outages. In the U.S., cloud provider network outages decreased from 162 to 109 outages.
    • Collaboration app network outages: Both globally and in the U.S., collaboration application network outages dropped from eight outages to one.

    Two notable outages

    On March 24, Zayo Group, a U.S.-based Tier 1 carrier headquartered in Boulder, Colorado, experienced an outage that impacted some of its partners and customers across multiple regions, including the U.S., Canada, Ireland, South Africa, Brazil, and Colombia. The outage lasted a total of 11 minutes within a 25-minute period and was first observed around 1:00 AM EDT. It appeared to center on Zayo Group nodes located in San Antonio, TX. Six minutes after appearing to clear, nodes located in San Antonio, TX, once again appeared to exhibit outage conditions. The outage was cleared around 1:25 AM EDT. Click here for an interactive view.

    On March 24, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers as well as Cogent customers across various regions, including the U.S., Canada, Spain, Ireland, Singapore, Qatar, the U.K., Germany, South Korea, Switzerland, India, Belgium, France, and Japan. The outage, which lasted 10 minutes, was first observed around 1:00 AM EDT and initially appeared to center on Cogent nodes located in Washington, D.C., and Denver, CO. Around two minutes after first being observed, the nodes located in Denver, CO, appeared to clear, while nodes located in Washington, D.C., were joined by nodes located in Los Angeles, CA, Ashburn, VA, and Phoenix, AZ, in exhibiting outage conditions. This rise appeared to coincide with an increase in the number of impacted downstream partners and customers. The outage was resolved around 1:15 AM EDT. Click here for an interactive view.

    Internet report for March 17-23

    ThousandEyes reported 664 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of March 17-23. That’s up a whopping 76% from 378 outages the week prior. Specific to the U.S., there were 287 outages, which is up 86% from 154 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 237 to 316 outages, a 33% increase compared to the week prior. In the U.S., however, ISP outages decreased slightly from 65 to 63, a 3% decrease.
    • Public cloud network outages: Globally, cloud provider network outages jumped from 98 to 258 outages. In the U.S., cloud provider network outages more than doubled, increasing from 66 to 162 outages.
    • Collaboration app network outages: Both globally and in the U.S., collaboration application network outages jumped from zero to 8 outages.

    Two notable outages

    On March 20, Lumen, a U.S.-based Tier 1 carrier, experienced an outage that affected customers and downstream partners across multiple regions, including the U.S., Australia, the U.K., Brazil, Germany, Spain, Colombia, Mexico, South Africa, and the Netherlands. The outage, lasting a total of 49 minutes over a period of 55 minutes, was first observed around 12:40 AM EDT and appeared to initially be centered on Lumen nodes located in New York, NY. Around five minutes into the outage, those nodes were joined by nodes located in Newark, NJ, in exhibiting outage conditions. Around five minutes later, in the same two regions, there was an increase in nodes exhibiting outage conditions. This rise appeared to coincide with an increase in the number of impacted downstream partners and customers. Around forty-five minutes after first being observed, all nodes appeared to clear; however, five minutes later, nodes located in New York, NY, appeared to exhibit outage conditions once again. The outage was cleared around 1:45 AM EDT. Click here for an interactive view.

    On March 21, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers as well as Cogent customers across various regions, including the U.S., Mexico, Luxembourg, the U.K., Spain, Australia, Singapore, Japan, Ireland, Poland, Hong Kong, Saudi Arabia, Peru, Ghana, Portugal, Germany, Turkey, Switzerland, Qatar, and India. The outage, which lasted 13 minutes, was first observed around 6:35 AM EDT and initially appeared to center on Cogent nodes located in Washington, D.C. Around ten minutes after first being observed, those nodes were joined by nodes located in New York, NY, and Atlanta, GA, in exhibiting outage conditions. This rise appeared to coincide with an increase in the number of impacted downstream partners and customers. The outage was resolved around 6:50 AM EDT. Click here for an interactive view.

    Internet report for March 10-16

    ThousandEyes reported 378 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of March 10-16. That’s down 11% from 425 outages the week prior. Specific to the U.S., there were 154 outages, which is down 23% from 199 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 219 to 237 outages, an 8% increase compared to the week prior. In the U.S., ISP outages decreased from 81 to 65, a 20% decrease.
    • Public cloud network outages: Globally, cloud provider network outages decreased from 111 to 98 outages. In the U.S., cloud provider network outages decreased from 69 to 66 outages.
    • Collaboration app network outages: Both globally and in the U.S., there were zero collaboration application network outages.

    Two notable outages

    On March 13, Time Warner Cable, a U.S.-based ISP, experienced a disruption that affected numerous customers and partners across the U.S. The outage, lasting 47 minutes, was first observed around 1:00 AM EDT. Initially it appeared to be centered on Time Warner Cable nodes in New York, NY, and Dallas, TX. Five minutes into the outage, those nodes were joined by nodes located in Houston, TX, in exhibiting outage conditions. Five minutes later, all nodes except those located in Dallas, TX, appeared to clear. The outage was cleared around 1:50 AM EDT. Click here for an interactive view.

    On March 15, Lumen, a U.S.-based Tier 1 carrier, experienced an outage that affected customers and downstream partners across the U.S. The outage, lasting a total of 53 minutes over a one-hour period, was first observed around 3:15 AM EDT and appeared to be centered on Lumen nodes located in Salt Lake City, UT. Forty-nine minutes into the outage, the Salt Lake City nodes appeared to clear before exhibiting outage conditions again about five minutes later. The outage was cleared around 4:15 AM EDT. Click here for an interactive view.

    Internet report for March 3-9

    ThousandEyes reported 425 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of March 3-9. That’s down 5% from 447 outages the week prior. Specific to the U.S., there were 199 outages, which is up 5% from 189 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages decreased from 261 to 219 outages, a 16% decrease compared to the week prior. In the U.S., ISP outages increased from 73 to 81, an 11% increase.
    • Public cloud network outages: Globally, cloud provider network outages decreased from 120 to 111 outages. In the U.S., cloud provider network outages decreased from 82 to 69 outages.
    • Collaboration app network outages: Globally, collaboration application network outages increased from zero to two outages. In the U.S., collaboration application network outages remained at zero for the second week in a row.

    Two notable outages

    On March 3, Microsoft experienced an outage on its network that impacted some downstream partners and access to services running on Microsoft environments in multiple regions, including the U.S., Canada, Costa Rica, Egypt, South Africa, Saudi Arabia, Germany, the Netherlands, France, Sweden, Brazil, Singapore, India, and Mexico. The outage, which lasted a total of one hour and 22 minutes over a two-hour period, was first observed around 11:05 AM EST and appeared to initially center on Microsoft nodes located in Toronto, Canada, and Cleveland, OH. Around 20 minutes after appearing to clear, the nodes located in Toronto, Canada, and Cleveland, OH, were joined by nodes located in Newark, NJ, in exhibiting outage conditions. A further ten minutes later, nodes located in Newark, NJ, were replaced by nodes located in New York, NY, in exhibiting outage conditions. Around ten minutes further into the outage, the nodes located in New York, NY, appeared to clear and were replaced by nodes located in Los Angeles, CA, and Des Moines, IA, in exhibiting outage conditions. Around fifty-five minutes after first being observed, nodes located in Los Angeles, CA, and Des Moines, IA, appeared to clear and were replaced by nodes located in Hamburg, Germany, in exhibiting outage conditions. Five minutes later, the nodes located in Hamburg, Germany, were replaced by nodes located in Des Moines, IA. A further twenty-five minutes later, the nodes located in Des Moines, IA, were replaced by nodes located in Paris, France, before themselves clearing five minutes later, leaving just nodes located in Toronto, Canada, and Cleveland, OH, exhibiting outage conditions. Fifteen minutes after appearing to clear, nodes located in Cleveland, OH, and New York, NY, once again appeared to exhibit outage conditions. The outage was cleared around 1:05 PM EST. Click here for an interactive view.

    On March 5, Arelion, a global Tier 1 provider headquartered in Stockholm, Sweden, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., Japan, the Netherlands, Brazil, Australia, Costa Rica, the U.K., Colombia, and Germany. The disruption, which lasted 35 minutes, was first observed around 2:10 AM EST and appeared to center on nodes located in Ashburn, VA.  Around 30 minutes after first being observed, the number of nodes exhibiting outage conditions located in Ashburn, VA, appeared to increase. This rise in nodes exhibiting outage conditions also appeared to coincide with an increase in the number of downstream customers, partners, and regions impacted. The outage was cleared around 2:50 AM EST. Click here for an interactive view.

    Internet report for Feb. 24-March 2

    ThousandEyes reported 447 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Feb. 24-March 2. That’s up 13% from 397 outages the week prior. Specific to the U.S., there were 189 outages, which is down 5% from 199 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 190 to 261 outages, a 37% increase compared to the week prior. In the U.S., ISP outages increased from 64 to 73, a 14% increase.
    • Public cloud network outages: Globally, cloud provider network outages decreased from 137 to 120 outages. In the U.S., cloud provider network outages decreased from 96 to 82 outages.
    • Collaboration app network outages: Both globally and in the U.S., collaboration application network outages dropped back down to zero.

    Two notable outages

    On February 28, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers as well as Cogent customers across various regions, including the U.S., Japan, the Philippines, the U.K., Romania, Thailand, South Korea, Hong Kong, New Zealand, Australia, Germany, Mexico, the Netherlands, South Africa, France, Luxembourg, India, Singapore, and Canada. The outage, which lasted 29 minutes, was first observed around 1:05 AM EST and initially appeared to center on Cogent nodes located in Los Angeles, CA, and San Jose, CA. Around five minutes after first being observed, the nodes located in Los Angeles, CA, appeared to clear and were replaced by nodes located in Washington, D.C., in exhibiting outage conditions. The outage was resolved around 1:25 AM EST. Click here for an interactive view.

    On February 24, Arelion (formerly known as Telia Carrier), a global Tier 1 provider headquartered in Stockholm, Sweden, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., India, France, Ireland, Spain, Kenya, Singapore, the Netherlands, Mexico, Belgium, Romania, Germany, New Zealand, Hungary, Thailand, Australia, and Hong Kong. The disruption, which lasted a total of 18 minutes, was first observed around 1:00 PM EST and appeared to initially center on nodes located in Los Angeles, CA. Ten minutes after first being observed, the nodes located in Los Angeles, CA, appeared to clear and were replaced by nodes located in Ashburn, VA, in exhibiting outage conditions. A further five minutes later, the number of nodes exhibiting outage conditions located in Ashburn, VA, appeared to increase. This rise in nodes exhibiting outage conditions also appeared to coincide with an increase in the number of downstream customers, partners, and regions impacted. The outage was cleared around 1:20 PM EST. Click here for an interactive view.

    Internet report for Feb. 17-23

    ThousandEyes reported 397 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Feb. 17-23. That’s nearly even with the week prior, when there were 398 outages. Specific to the U.S., there were 199 outages, which is up 2% from 196 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages decreased from 205 to 190 outages, a 7% decrease compared to the week prior. In the U.S., ISP outages decreased from 88 to 64, a 27% decrease.
    • Public cloud network outages: Globally, cloud provider network outages increased from 96 to 137 outages. In the U.S., cloud provider network outages increased from 69 to 96 outages.
    • Collaboration app network outages: Globally, there was one collaboration application network outage, same as the week prior. In the U.S., there was one collaboration application outage, ending a four-week run of zero outages.

    Two notable outages

    On February 17, UUNET Verizon, acquired by Verizon in 2006 and now operating as Verizon Business, experienced an outage that affected customers and partners across multiple regions, including the U.S., Singapore, the Netherlands, the Philippines, Brazil, Germany, Switzerland, Canada, the U.K., Ireland, Japan, South Korea, Australia, France, and India. The outage lasted a total of an hour over a one-hour and 15-minute period. The outage was first observed around 2:00 PM EST and initially centered on Verizon Business nodes in Washington, D.C. Five minutes into the outage, the nodes located in Washington, D.C., were joined by nodes located in Brooklyn, NY, in exhibiting outage conditions. A further five minutes later, the nodes located in Brooklyn, NY, were replaced by nodes located in New York, NY, in exhibiting outage conditions. The outage was cleared around 3:15 PM EST. Click here for an interactive view.

    On February 18, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers as well as Cogent customers across various regions, including the U.S., Brazil, Japan, the Philippines, Ghana, Hong Kong, India, the U.K., Singapore, Indonesia, Canada, South Africa, Spain, Mexico, and Taiwan. The outage, which lasted 20 minutes, was first observed around 8:15 AM EST and initially appeared to center on Cogent nodes located in Washington, D.C., Los Angeles, CA, and Dallas, TX. Around ten minutes after first being observed, the nodes located in Washington, D.C., and Dallas, TX, appeared to clear. Around five minutes later, the nodes experiencing outage conditions expanded to include nodes in Dallas, TX, San Francisco, CA, and Phoenix, AZ. This increase in the number of nodes and locations exhibiting outage conditions appeared to coincide with an increase in the number of impacted regions, downstream partners, and customers. A further five minutes later, nodes located in Phoenix, AZ, and Dallas, TX, appeared to clear, leaving only the nodes located in San Francisco, CA, and Los Angeles, CA, exhibiting outage conditions. The outage was resolved around 8:40 AM EST. Click here for an interactive view.

    Internet report for Feb. 10-16

    ThousandEyes reported 398 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Feb. 10-16. That’s up 13% from 353 outages the week prior. Specific to the U.S., there were 196 outages, which is down 7% from 210 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 173 to 205 outages, an 18% increase compared to the week prior. In the U.S., ISP outages increased slightly from 86 to 88, a 2% increase.
    • Public cloud network outages: Globally, cloud provider network outages decreased from 124 to 96 outages. In the U.S., cloud provider network outages decreased from 96 to 69 outages.
    • Collaboration app network outages: Globally, collaboration application network outages increased to one outage. In the U.S., levels remained at zero for the third week in a row. 

    Two notable outages

    On February 12, GTT Communications, a Tier 1 provider headquartered in Tysons, VA, experienced an outage that impacted some of its partners and customers across multiple regions, including the U.S., Germany, the Dominican Republic, Canada, the U.K., Australia, Mexico, Spain, Singapore, Taiwan, Colombia, and Japan. The outage, which lasted 39 minutes, was first observed around 3:05 AM EST and appeared to initially be centered on GTT nodes located in Washington, D.C.  Around ten minutes into the outage, nodes located in Washington, D.C., were joined by GTT nodes located in New York, NY, and Frankfurt, Germany, in exhibiting outage conditions. This increase in the number of nodes and locations exhibiting outage conditions appeared to coincide with an increase in the number of impacted regions, downstream partners, and customers. A further five minutes later, the nodes located in New York, NY, and Frankfurt, Germany, appeared to clear. The outage was cleared around 3:45 AM EST. Click here for an interactive view.

    On February 12, Lumen, a U.S.-based Tier 1 carrier, experienced an outage that affected customers and downstream partners across the U.S. The outage, lasting 40 minutes, was first observed around 3:10 AM EST and appeared to initially be centered on Lumen nodes located in Kansas City, MO. Around 15 minutes after first being observed, the nodes located in Kansas City, MO, were joined by nodes located in Dallas, TX, in exhibiting outage conditions. This increase appeared to coincide with an increase in the number of impacted downstream partners and customers. The outage was cleared around 3:55 AM EST. Click here for an interactive view.

    Internet report for Feb. 3-9

    ThousandEyes reported 353 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Feb. 3-9. That’s up 7% from 331 outages the week prior. Specific to the U.S., there were 210 outages, which is up 12% from 188 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 126 to 173 outages, a 37% increase compared to the week prior. In the U.S., ISP outages increased from 65 to 86, a 32% increase.
    • Public cloud network outages: Globally, cloud provider network outages decreased from 144 to 124 outages. In the U.S., however, cloud provider network outages increased from 88 to 96 outages.
    • Collaboration app network outages: Both globally and in the U.S., collaboration application network outages remained at zero for the second week in a row.

    Two notable outages

    On February 5, Lumen, a U.S.-based Tier 1 carrier, experienced an outage that affected customers and downstream partners across multiple regions including the U.S., Canada, and Singapore. The outage, lasting a total of 35 minutes over a 45-minute period, was first observed around 3:30 AM EST and appeared to initially be centered on Lumen nodes located in Seattle, WA. Around five minutes into the outage, the nodes located in Seattle, WA, were joined by nodes located in Los Angeles, CA, in exhibiting outage conditions. This increase in the number and location of nodes exhibiting outage conditions appeared to coincide with the peak in the number of impacted regions, downstream partners, and customers. A further five minutes later, the nodes located in Los Angeles, CA, appeared to clear, leaving only the nodes located in Seattle, WA, exhibiting outage conditions. The outage was cleared around 4:15 AM EST. Click here for an interactive view.

    On February 6, Internap, a U.S.-based cloud service provider, experienced an outage that impacted many of its downstream partners and customers within the U.S. The outage, lasting a total of one hour and 14 minutes over a one-hour and 28-minute period, was first observed around 12:15 AM EST and appeared to be centered on Internap nodes located in Boston, MA. The outage was at its peak around one hour and 10 minutes after being observed, with the highest number of impacted partners and customers. The outage was cleared around 1:45 AM EST. Click here for an interactive view.

    Internet report for Jan. 27-Feb. 2

    ThousandEyes reported 331 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Jan. 27-Feb. 2. That’s down 16% from 395 outages the week prior. Specific to the U.S., there were 188 outages, which is down 4% from 195 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages decreased from 199 to 126 outages, a 37% decrease compared to the week prior. In the U.S., ISP outages decreased slightly from 67 to 65, a 3% decrease.
    • Public cloud network outages: Globally, cloud provider network outages increased slightly from 142 to 144 outages. In the U.S., however, cloud provider network outages decreased from 110 to 88 outages.
    • Collaboration app network outages: Both globally and in the U.S., collaboration application network outages dropped down to zero. 

    Two notable outages

    On January 29, Arelion (formerly known as Telia Carrier), a global Tier 1 provider headquartered in Stockholm, Sweden, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., Australia, Argentina, Belgium, Bahrain, Germany, France, Brazil, India, Peru, Mexico, and Guatemala. The disruption, which lasted a total of 24 minutes over a 55-minute period, was first observed around 12:40 PM EST and appeared to initially center on nodes located in Dallas, TX, and Ghent, Belgium. Fifteen minutes after appearing to clear, the nodes located in Dallas, TX, began exhibiting outage conditions again. Around 1:20 PM EST, the nodes located in Dallas, TX, were joined by nodes located in Atlanta, GA, in exhibiting outage conditions. This rise in nodes and locations exhibiting outage conditions also appeared to coincide with an increase in the number of downstream customers, partners, and regions impacted. The outage was cleared around 1:35 PM EST. Click here for an interactive view.

    On February 2, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that affected customers and downstream partners across multiple regions including the U.S., Poland, and Spain. The outage, lasting a total of 22 minutes, was first observed around 3:10 AM EST and appeared to initially center on nodes located in Washington, D.C. Fifteen minutes after first being observed, the nodes located in Washington, D.C., appeared to clear and were replaced by nodes located in Miami, FL, in exhibiting outage conditions. A further five minutes later, the nodes located in Miami, FL, were joined by nodes located in Atlanta, GA, in exhibiting outage conditions. This increase in nodes exhibiting outage conditions appeared to coincide with an increase in the number of impacted downstream partners and customers. The outage was cleared around 3:55 AM EST. Click here for an interactive view.

    Internet report for Jan. 20-26

    ThousandEyes reported 395 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Jan. 20-26. That’s up 20% from 328 outages the week prior. Specific to the U.S., there were 195 outages, which is up 24% from 157 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased slightly from 186 to 199 outages, a 7% increase compared to the week prior. In the U.S., ISP outages increased from 53 to 67, a 26% increase.
    • Public cloud network outages: Globally, cloud provider network outages jumped from 76 to 142 outages. In the U.S., cloud provider network outages increased from 69 to 110 outages.
    • Collaboration app network outages: Globally, collaboration application network outages remained unchanged from the week prior, recording 1 outage. In the U.S., collaboration application network outages dropped to zero.

    Two notable outages

    On January 24, Lumen, a U.S.-based Tier 1 carrier, experienced an outage that affected customers and downstream partners across multiple regions including the U.S., Italy, Canada, France, India, the U.K., Germany, and the Netherlands. The outage, lasting a total of 37 minutes over a period of 45 minutes, was first observed around 1:20 AM EST and appeared to be centered on Lumen nodes located in New York, NY. Around five minutes into the outage, the number of Lumen nodes exhibiting outage conditions in New York, NY, appeared to drop. This drop in the number of nodes exhibiting outage conditions appeared to coincide with a decrease in the number of impacted downstream partners and customers. The outage was cleared around 2:05 AM EST. Click here for an interactive view.

    On January 23, AT&T, a U.S.-based telecommunications company, experienced an outage on its network that impacted AT&T customers and partners across the U.S. The outage, lasting a total of 13 minutes over a 20-minute period, was first observed around 10:35 AM EST and appeared to center on AT&T nodes located in Dallas, TX. Around 15 minutes after first being observed, the number of nodes exhibiting outage conditions located in Dallas, TX, appeared to drop. This decrease in nodes exhibiting outage conditions appeared to coincide with a drop in the number of impacted partners and customers. The outage was cleared at around 10:55 AM EST. Click here for an interactive view.

    Internet report for Jan. 13-19

    ThousandEyes reported 328 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Jan. 13-19. That’s up 11% from 296 outages the week prior. Specific to the U.S., there were 157 outages, which is up 34% from 117 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased slightly from 182 to 186 outages, a 2% increase compared to the week prior. In the U.S., ISP outages increased from 40 to 53, a 33% increase.
    • Public cloud network outages: Globally, cloud provider network outages increased from 72 to 76 outages. In the U.S., cloud provider network outages increased from 54 to 69 outages.
    • Collaboration app network outages: Globally, and in the U.S., collaboration application network outages dropped from two outages to one.

    Two notable outages

    On January 15, Lumen, a U.S.-based Tier 1 carrier (previously known as CenturyLink), experienced an outage that affected customers and downstream partners across multiple regions including the U.S., Hong Kong, Germany, Canada, the U.K., Chile, Colombia, Austria, India, Australia, the Netherlands, Spain, France, Singapore, Japan, South Africa, Nigeria, China, Vietnam, Saudi Arabia, Israel, Peru, Norway, Argentina, Turkey, Hungary, Ireland, New Zealand, Egypt, the Philippines, Italy, Sweden, Bulgaria, Estonia, Romania, and Mexico. The outage, lasting a total of one hour and 5 minutes over a nearly three-hour period, was first observed around 5:02 AM EST and appeared to initially be centered on Lumen nodes located in Dallas, TX. Around one hour after appearing to clear, nodes located in Dallas, TX, began exhibiting outage conditions again, this time joined by Lumen nodes located in San Jose, CA, Washington, D.C., Chicago, IL, New York, NY, London, England, Los Angeles, CA, San Francisco, CA, Sacramento, CA, Fresno, CA, Seattle, WA, Santa Clara, CA, and Colorado Springs, CO. This increase in the number and location of nodes exhibiting outage conditions appeared to coincide with the peak in the number of impacted regions, downstream partners, and customers. The outage was cleared around 7:25 AM EST. Click here for an interactive view.

    On January 16, Hurricane Electric, a network transit provider headquartered in Fremont, CA, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., Malaysia, Singapore, Indonesia, New Zealand, Hong Kong, the U.K., Canada, South Korea, Japan, Thailand, and Germany. The outage, lasting 22 minutes, was first observed around 2:28 AM EST and initially appeared to center on Hurricane Electric nodes located in Chicago, IL. Five minutes into the outage, the nodes located in Chicago, IL, were joined by Hurricane Electric nodes located in Portland, OR, Seattle, WA, and Ashburn, VA, in exhibiting outage conditions. This coincided with an increase in the number of downstream partners and countries impacted. Around 12 minutes into the outage, all nodes, except those located in Chicago, IL, appeared to clear. The outage was cleared at around 2:55 AM EST. Click here for an interactive view.

    Internet report for Jan. 6-12

    ThousandEyes reported 296 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Jan. 6-12. That’s double the number of outages the week prior (148). Specific to the U.S., there were 117 outages, which is up 50% from 78 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 80 to 182 outages, a 127% increase compared to the week prior. In the U.S., ISP outages increased from 25 to 40, a 60% increase.
    • Public cloud network outages: Globally, cloud provider network outages increased from 34 to 72 outages. In the U.S., cloud provider network outages increased from 31 to 54 outages.
    • Collaboration app network outages: Globally, and in the U.S., there were two collaboration application network outages, up from one a week earlier.

    Two notable outages

    On January 8, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers and customers across various regions, including the U.S., India, Canada, Mexico, Singapore, South Africa, Indonesia, Sweden, the U.K., Honduras, Japan, Vietnam, Thailand, Poland, the Netherlands, Australia, the Philippines, Greece, Germany, Argentina, New Zealand, France, Malaysia, Taiwan, and Colombia. The outage lasted a total of one hour and nine minutes, distributed across a series of occurrences over a period of three hours and 50 minutes. The first occurrence of the outage was observed around 6:00 AM EST and initially seemed to be centered on Cogent nodes located in Los Angeles, CA. Around three hours and 20 minutes after first being observed, nodes in Los Angeles, CA, began exhibiting outage conditions again, this time accompanied by nodes in Chicago, IL, El Paso, TX, and San Jose, CA. This increase in nodes experiencing outages appeared to coincide with a rise in the number of affected downstream customers, partners, and regions. Five minutes later, the nodes located in Chicago, IL, and El Paso, TX, appeared to clear, leaving only the nodes in Los Angeles, CA, and San Jose, CA, exhibiting outage conditions. The outage was cleared around 9:50 AM EST. Click here for an interactive view.

    On January 10, Lumen, a U.S.-based Tier 1 carrier (previously known as CenturyLink), experienced an outage that affected customers and downstream partners across multiple regions including Switzerland, South Africa, Egypt, the U.K., the U.S., Spain, Portugal, Germany, the United Arab Emirates, France, Hong Kong, and Italy. The outage, lasting a total of 19 minutes, was first observed around 9:05 PM EST and appeared to be centered on Lumen nodes located in London, England, and Washington, D.C. Around twenty-five minutes after the outage was first observed, the nodes located in London, England, appeared to clear, leaving only Lumen nodes located in Washington, D.C., exhibiting outage conditions. This drop in the number of nodes and locations exhibiting outage conditions appeared to coincide with a decrease in the number of impacted downstream partners and customers. The outage was cleared around 9:55 PM EST. Click here for an interactive view.

    Internet report for Dec. 30, 2024-Jan. 5, 2025

    ThousandEyes reported 148 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Dec. 30, 2024-Jan. 5, 2025. That’s up 95% from 76 outages the week prior. Specific to the U.S., there were 78 outages, which is up nearly threefold from 28 outages the week prior. Here’s a breakdown by category:

    • ISP outages: Globally, total ISP outages increased from 46 to 80 outages, a 74% increase compared to the week prior. In the U.S., ISP outages increased from 10 to 25, a 150% increase.
    • Public cloud network outages: Globally, cloud provider network outages increased from 18 to 34 outages. In the U.S., cloud provider network outages increased from 13 to 31 outages.
    • Collaboration app network outages: There was one collaboration application network outage globally and in the U.S., which is an increase from zero in the previous week.

    Two notable outages

    On December 30, Neustar, a U.S.-based technology service provider headquartered in Sterling, VA, experienced an outage that impacted multiple downstream providers, as well as Neustar customers within multiple regions, including the U.S., Mexico, Taiwan, Singapore, Canada, the U.K., Spain, Romania, Germany, Luxembourg, France, Costa Rica, Ireland, Japan, India, Hong Kong, and the Philippines. The outage, lasting a total of one hour and 40 minutes, was first observed around 2:00 PM EST and appeared to initially center on Neustar nodes located in Los Angeles, CA, and Washington, D.C. Around 10 minutes into the outage, nodes located in Washington, D.C., were replaced by nodes located in Ashburn, VA, in exhibiting outage conditions. Around 10 minutes later, nodes located in Ashburn, VA, and Los Angeles, CA, appeared to clear and were replaced by nodes located in Dallas, TX, and San Jose, CA, in exhibiting outage conditions. Five minutes later, these nodes were replaced by nodes located in London, England, Ashburn, VA, New York, NY, and Washington, D.C. A further five minutes later, these nodes were joined by nodes located in Dallas, TX, in exhibiting outage conditions. This increase in nodes exhibiting outage conditions also appeared to coincide with an increase in the number of downstream partners and regions impacted. The outage was cleared around 3:40 PM EST. Click here for an interactive view.

    On January 4, AT&T experienced an outage on its network that impacted AT&T customers and partners across multiple regions including the U.S., Ireland, the Philippines, the U.K., France, and Canada. The outage, lasting around 23 minutes, was first observed around 3:35 AM EST, appearing to initially center on AT&T nodes located in Phoenix, AZ, Los Angeles, CA, San Jose, CA, and New York, NY. Around ten minutes into the outage, nodes located in Phoenix, AZ, and San Jose, CA, appeared to clear, leaving just nodes located in Los Angeles, CA, and New York, NY, exhibiting outage conditions. This decrease in nodes exhibiting outage conditions appeared to coincide with a drop in the number of impacted partners and customers. The outage was cleared at around 4:00 AM EST. Click here for an interactive view.


  • Major network vendors team to advance Ethernet for scale-up AI networking

    As AI networking technology blossoms, yet another group has formed to make sure Ethernet can handle the stress.

    AMD, Arista, ARM, Broadcom, Cisco, HPE Networking, Marvell, Meta, Microsoft, Nvidia, OpenAI and Oracle have joined the new Ethernet for Scale-Up Networking (ESUN) initiative, which promises to advance the networking technology to handle scale-up connectivity across accelerated AI infrastructure. ESUN was formed by the nonprofit Open Compute Project, which is hosting its 2025 OCP Global Summit this week in San Jose, Calif.

    “AI workloads are re-shaping modern data center architectures, and networking solutions must evolve to meet the growing demands,” wrote Martin Lund, executive vice president of Cisco’s common hardware group, in a blog post about the news. “ESUN brings together AI infrastructure operators and vendors to align on open standards, incorporate best practices, and accelerate innovation in Ethernet solutions for scale-up networking.”

    ESUN will focus solely on open, standards-based Ethernet switching and framing for scale-up networking, excluding host-side stacks, non-Ethernet protocols, application-layer solutions, and proprietary technologies. The group will expand the development and interoperability of XPU network interfaces and Ethernet switch ASICs for scale-up networks, the OCP stated in a blog post: “The initial focus will be on L2/L3 Ethernet framing and switching, enabling robust, lossless, and error-resilient single-hop and multi-hop topologies.”
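
    To make “L2 Ethernet framing” concrete, the sketch below parses a standard Ethernet II header using Python’s standard library. This is generic Ethernet per long-standing IEEE 802.3 conventions (destination MAC, source MAC, EtherType), shown only for orientation; it is not an ESUN-defined format, and the example frame is made up.

    ```python
    import struct

    def parse_ethernet_header(frame: bytes) -> dict:
        """Parse the 14-byte Ethernet II header: dst MAC, src MAC, EtherType."""
        dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
        return {
            "dst": dst.hex(":"),
            "src": src.hex(":"),
            "ethertype": hex(ethertype),  # e.g. 0x0800 = IPv4, 0x86dd = IPv6
            "payload_len": len(frame) - 14,
        }

    # Made-up broadcast frame carrying an IPv4 EtherType.
    frame = bytes.fromhex("ffffffffffff" "021122334455" "0800") + b"...payload..."
    print(parse_ethernet_header(frame))
    ```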

    Importantly, OCP says ESUN will actively engage with other organizations looking to advance Ethernet for AI networks, such as the Ultra Ethernet Consortium (UEC) and the long-standing IEEE 802.3 Ethernet working group, to align open standards, incorporate best practices, and accelerate innovation.

    AMD, Arista, Broadcom, Cisco, Eviden, HPE, Intel, Meta and Microsoft originally formed the UEC in 2023 with the goal of bringing together industry leaders to build a complete Ethernet-based communication stack architecture for high-performance networking; the consortium now has more than 75 members.

    Another multivendor development group, the Ultra Accelerator Link (UALink) consortium, recently published its first specification aimed at delivering an open standard interconnect for AI clusters. The UALink 200G 1.0 Specification was crafted by many of the group’s 75 members, which include AMD, Broadcom, Cisco, Google, HPE, Intel, Meta and Synopsys, and lays out the technology needed to support a maximum data rate of 200 gigatransfers per second (GT/s) per channel or lane between accelerators and switches, connecting up to 1,024 accelerators within an AI computing pod, UALink stated.
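
    As a back-of-envelope illustration of what those lane rates imply, the sketch below converts GT/s into raw bytes per second. The one-bit-per-transfer assumption and the lane counts are ours, chosen only to show the arithmetic; real delivered bandwidth depends on the encoding and protocol overhead the specification defines.

    ```python
    # Raw signaling arithmetic for a 200 GT/s lane (illustrative assumptions only).
    GT_PER_LANE = 200        # giga-transfers per second per lane, per UALink 200G 1.0
    BITS_PER_TRANSFER = 1    # assumption for this sketch; ignores encoding overhead

    def raw_gb_per_s(lanes: int) -> float:
        """Raw one-direction throughput in gigabytes per second."""
        return GT_PER_LANE * BITS_PER_TRANSFER * lanes / 8

    for lanes in (1, 2, 4):
        print(f"{lanes} lane(s): {raw_gb_per_s(lanes):.0f} GB/s raw, per direction")
    # 1 lane -> 25 GB/s; 4 lanes -> 100 GB/s, before any protocol overhead
    ```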

    ESUN will leverage the work of IEEE and UEC for Ethernet when possible, stated Arista’s CEO Jayshree Ullal and chief development officer Hugh Holbrook in a blog post about ESUN. To that end, Ullal and Holbrook described a modular framework for Ethernet scale-up with three key building blocks:

    1. Common Ethernet headers for interoperability: ESUN will build on top of Ethernet to enable the widest range of upper-layer protocols and use cases.
    2. Open Ethernet data link layer: Provides the foundation for AI collectives with high performance at XPU cluster scale. By selecting standards-based mechanisms (such as Link-Layer Retry (LLR), Priority-based Flow Control (PFC), and Credit-based Flow Control (CBFC)), ESUN enables cost-efficiency and flexibility along with performance for these networks. Losslessness matters because even minor delays can stall thousands of concurrent operations (see the sketch after this list).
    3. Ethernet PHY layer: By relying on the ubiquitous Ethernet physical layer, interoperability across multiple vendors and a wide range of optical and copper interconnect options is assured.
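
    To illustrate one of those lossless mechanisms, here is a toy credit-based flow control loop in Python. A sender may transmit only while it holds credits; the receiver returns a credit each time it drains a buffer slot, so the receive buffer can never overflow and frames are back-pressured rather than dropped. This is a conceptual sketch of CBFC, not the ESUN or IEEE wire protocol, and every name in it is our own.

    ```python
    from collections import deque

    class CreditedLink:
        """Toy credit-based flow control: one credit per receive-buffer slot."""

        def __init__(self, buffer_slots: int):
            self.credits = buffer_slots
            self.rx_buffer = deque()

        def try_send(self, frame: str) -> bool:
            if self.credits == 0:
                return False             # back-pressure: sender waits, nothing is dropped
            self.credits -= 1
            self.rx_buffer.append(frame)
            return True

        def receiver_drain(self) -> None:
            if self.rx_buffer:
                self.rx_buffer.popleft()  # frame consumed by the receiver...
                self.credits += 1         # ...credit returned to the sender

    link = CreditedLink(buffer_slots=2)
    print([link.try_send(f"frame{i}") for i in range(3)])  # [True, True, False]
    link.receiver_drain()
    print(link.try_send("frame3"))                          # True again
    ```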

    “ESUN is designed to support any upper layer transport, including one based on SUE-T. SUE-T (Scale-Up Ethernet Transport) is a new OCP workstream, seeded by Broadcom’s contribution of SUE (Scale-Up Ethernet) to OCP. SUE-T looks to define functionality that can be easily integrated into an ESUN-based XPU for reliability scheduling, load balancing, and transaction packing, which are critical performance enhancers for some AI workloads,” Ullal and Holbrook wrote.

    “In essence, the ESUN framework enables a collection of individual accelerators to become a single, powerful AI super computer, where network performance directly correlates to the speed and efficiency of AI model development and execution,” Ullal and Holbrook wrote. “The layered approach of ESUN and SUE-T over Ethernet promotes innovation without fragmentation. XPU accelerator developers retain flexibility on host-side choices such as access models (push vs. pull, and memory vs streaming semantics), transport reliability (hop-by-hop vs. end-to-end), ordering rules, and congestion control strategies while retaining system design choices. The ESUN initiative takes a practical approach for iterative improvements.”

    Gartner expects gains in AI networking fabrics

    Scale-up AI fabrics (SAIF) have captured a lot of industry attention lately, according to Gartner. The research firm is forecasting massive growth in SAIF to support AI infrastructure initiatives through 2029. The vendor landscape will remain dynamic over the next two years, with multiple technology ecosystems emerging, Gartner wrote in its report, What are “Scale-Up” AI Fabrics and Why Should I Care?

    “‘Scale-Up’ AI fabrics (SAIF) provide high-bandwidth, low-latency physical network interconnectivity and enhanced memory interaction between nearby AI processors,” Gartner wrote. “Current implementations of SAIF are vendor-proprietary platforms, and there are proximity limitations (typically, SAIF is confined to only a rack or row). In most scenarios, Gartner recommends using Ethernet when connecting multiple SAIF systems together. We believe the scale, performance and supportability of Ethernet is optimal.”

    “From 2025 through 2027, we expect major shifts in this technology, including traction for Nvidia’s SAIF offering and other SAIF options. As of mid-2025, this technology segment remains dominated by Nvidia, which is evolving and expanding its NVLink technology to partners such as Marvell, Fujitsu, Qualcomm and Astera Labs to directly integrate with Nvidia’s SAIF offering (branded as Nvidia NVLink Fusion),” Gartner stated.

    However, competing ecosystems are emerging, including UALink and others; Gartner wrote that these initiatives create the potential for a multivendor ecosystem, greater flexibility, and reduced lock-in, leading to a more competitive environment.

