SkyWatchMesh – UAP Intelligence Network

UAP Intelligence Network – Real-time monitoring of official UAP reports from government agencies and scientific institutions worldwide

Category: UAP Intelligence

  • Arm joins Open Compute Project to build next-generation AI data center silicon

    Chip designer Arm Holdings plc has announced it is joining the Open Compute Project to help address rising energy demands from AI-oriented data centers.

    Arm said it plans to support companies in developing the next phase of purpose-built silicon and packaging for converged infrastructure. Building this next phase of infrastructure requires co-designed capabilities across compute, acceleration, memory, storage, networking and beyond, the company said.

    New converged AI data centers won’t be built like those that came before, with separate CPU, GPU, networking and memory components. They will feature increased density through purpose-built, in-package integration of multiple chiplets using 2.5D and 3D technologies, according to Arm.

    Arm is addressing this by contributing the Foundation Chiplet System Architecture (FCSA) specification to the Open Compute Project. FCSA leverages Arm’s ongoing work with the Arm Chiplet System Architecture (CSA) but addresses industry demand for a framework that aligns to vendor- and CPU architecture-neutral requirements.

    To power the next generation of converged datacenters, Arm is contributing its new Foundation Chiplet System Architecture specification to the OCP and broadening the Arm Total Design ecosystem.

    The benefits for OEM partners are power efficiency and custom design of the processors, said Mohamed Awad, senior vice president and general manager of infrastructure business at Arm. “For anybody building a data center, the specific challenge that they’re running into is not really about the dollars associated with building, it’s about keeping up with the [power] demand,” he said.

    Keeping up with the demand comes down to performance, and more specifically, performance per watt. With power limited, OEMs have become much more involved in all aspects of the system design, rather than pulling silicon off the shelf or pulling servers or racks off the shelf.

    “They’re getting much more specific about what that silicon looks like, which is a big departure from where the data center was ten or 15 years ago. The point here is that they look to create a more optimized system design to bring the acceleration closer to the compute, and get much better performance per watt,” said Awad.

    The Open Compute Project is a global industry organization dedicated to designing and sharing open-source hardware configurations for data center technologies and infrastructure. It covers everything from silicon products to rack and tray design. It is hosting its 2025 OCP Global Summit this week in San Jose, Calif.

    Arm also was part of the Ethernet for Scale-Up Networking (ESUN) initiative announced this week at the Summit that included AMD, Arista, Broadcom, Cisco, HPE Networking, Marvell, Meta, Microsoft, and Nvidia. ESUN promises to advance Ethernet networking technology to handle scale-up connectivity across accelerated AI infrastructures.

    Arm’s goal by joining OCP is to encourage knowledge sharing and collaboration between companies and users to share ideas, specifications and intellectual property. It is known for focusing on modular rather than monolithic designs, which is where chiplets come in.

    For example, customers might have several different companies building a 64-core CPU and then choose the I/O to pair it with, such as PCIe or NVLink. They then choose their own memory subsystem, deciding whether to go with HBM, LPDDR, or DDR. It’s all mix and match, like Legos, Awad said.

    “What this model allows for is the sort of selection of those components and differentiation where it makes sense, without having to redo all the other aspects of the system which are effectively common across multiple different designs,” said Awad.
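
    To make the “Lego” analogy concrete, here is a minimal sketch of the mix-and-match model. The component names and fields are hypothetical illustrations, not part of Arm’s FCSA or CSA specifications:

    ```python
    from dataclasses import dataclass
    from itertools import product

    # Hypothetical chiplet catalog illustrating the mix-and-match model.
    # Names are invented for illustration, not from the FCSA specification.

    @dataclass(frozen=True)
    class ChipletSystem:
        cpu: str     # e.g., a 64-core CPU chiplet from one of several vendors
        io: str      # I/O chiplet, such as PCIe or NVLink
        memory: str  # memory subsystem: HBM, LPDDR, or DDR

    CPUS = ["vendorA-64core", "vendorB-64core"]
    IO = ["PCIe", "NVLink"]
    MEMORY = ["HBM", "LPDDR", "DDR"]

    # Because the chiplet interfaces are standardized, any combination composes
    # without redesigning the parts that are common across designs.
    designs = [ChipletSystem(c, i, m) for c, i, m in product(CPUS, IO, MEMORY)]
    print(f"{len(designs)} possible systems from {len(CPUS)}x{len(IO)}x{len(MEMORY)} interchangeable parts")
    ```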


  • The business case for microsegmentation: Lower insurance costs, 33% faster ransomware response

    Network segmentation has been a security best practice for decades, yet for many reasons, not all network deployments have fully embraced microsegmentation. With ransomware attacks becoming increasingly sophisticated and cyber insurance underwriters paying closer attention to network architecture, microsegmentation is transitioning from a nice-to-have to a business imperative.

    New research from Akamai examines how organizations are approaching microsegmentation adoption, implementation challenges, and the tangible benefits they’re seeing. The data reveals a significant gap between awareness and execution, but it also shows clear financial and operational incentives for network teams willing to make the transition. Key findings from Akamai’s Segmentation Impact Study, which surveyed 1,200 security and technology leaders worldwide, include:

    • Only 35% of organizations have implemented microsegmentation across their network environment despite 90% having adopted some form of segmentation.
    • Organizations with more than $1 billion in revenue saw ransomware containment time reduced by 33% after implementing microsegmentation.
    • 60% of surveyed organizations received lower insurance premiums tied to segmentation maturity.
    • 75% of insurers now assess segmentation posture during underwriting.
    • Network complexity (44%), visibility gaps (39%) and operational resistance (32%) remain the primary barriers to adoption.
    • Half of non-adopters plan to implement microsegmentation within two years, while 68% of current users expect to increase investment.

    “I believe the biggest surprise in the data was the effectiveness of microsegmentation when used as a tool for containing breaches,” Garrett Weber, field CTO for enterprise security at Akamai, told Network World. “We often think of segmentation as a set-it-and-forget-it solution, but with microsegmentation bringing the control points to the workloads themselves, it offers organizations the ability to quickly contain breaches.”

    Why traditional segmentation falls short

    Microsegmentation applies security policies at the individual workload or application level rather than at the network perimeter or between large network zones.
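
    As a rough illustration of what “policies at the individual workload level” means in practice, here is a toy default-deny allow-list evaluator. The workload labels and ports are hypothetical, and this is a sketch of the concept rather than any vendor’s policy syntax:

    ```python
    # Toy workload-level microsegmentation: default-deny, explicit allow rules.
    # Labels and ports are invented for illustration.

    POLICY = {
        # (source workload, destination workload, destination port)
        ("web-frontend", "orders-api", 8443),
        ("orders-api", "orders-db", 5432),
    }

    def is_allowed(src: str, dst: str, port: int) -> bool:
        """A flow passes only if a rule explicitly permits it (default deny)."""
        return (src, dst, port) in POLICY

    # Lateral movement from a compromised web server straight to the database
    # is blocked, even though both workloads sit in the same network zone.
    assert is_allowed("web-frontend", "orders-api", 8443)
    assert not is_allowed("web-frontend", "orders-db", 5432)
    print("policy checks passed")
    ```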

    Weber challenged network admins who feel their current north-south segmentation is adequate. “I would challenge them to really try and assess and understand the attacker’s ability to move laterally within the segments they’ve created,” he said. “Without question they will find a path from a vulnerable web server, IoT device or endpoint that can allow an attacker to move laterally and access sensitive information within the environment.”

    The data supports this assessment. Organizations implementing microsegmentation reported multiple benefits beyond ransomware containment. These include protecting critical assets (74%), responding faster to incidents (56%) and safeguarding against internal threats (57%).

    Myths and misconceptions about microsegmentation

    The report detailed a number of reasons why organizations have not properly deployed microsegmentation. Network complexity topped the list of implementation barriers at 44%, but Weber questioned the legitimacy of that barrier.

    “Many organizations believe their network is too complex for microsegmentation, but once we dive into their infrastructure and how applications are developed and deployed, we typically see that microsegmentation solutions are a better fit for complex networks than traditional segmentation approaches,” Weber said. “There is usually a misconception that microsegmentation solutions are reliant on a virtualization platform or cannot support a variety of cloud or Kubernetes deployments, but modern microsegmentation solutions are built for simplifying network segmentation within complex environments.”

    Another common misconception is that implementing microsegmentation solutions will impact performance of applications and potentially create outages from poor policy creation. “Modern microsegmentation solutions are designed to minimize performance impacts and provide the proper workflows and user experiences to safely implement security policies at scale,” Weber said.

    Insurance benefits create business case

    Cyber insurance has emerged as an unexpected driver for microsegmentation adoption. The report states that 85% of organizations using microsegmentation find audit reporting easier. Of those, 33% reported reduced costs associated with attestation and assurance. More significantly, 74% believe stronger segmentation increases the likelihood of insurance claim approval.

    For network teams struggling to justify the investment to leadership, the insurance angle can provide concrete financial benefits: 60% of surveyed organizations said they received premium reductions as a result of improved segmentation posture.

    Beyond insurance savings and faster ransomware response, Weber recommends network admins track several operational performance indicators to demonstrate ongoing value.

    Attack surface reduction of critical applications or environments can provide a clear security posture metric. Teams should also monitor commonly abused ports and services like SSH and Remote Desktop. The goal is tracking how much of that traffic is being analyzed and controlled by policy.

    For organizations integrating microsegmentation into SOC playbooks, time to breach identification and containment can offer a direct measure of incident response improvement.
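
    One way to operationalize the port-monitoring metric above: from flow logs, compute what share of SSH/RDP traffic is explicitly matched by policy. A minimal sketch, with made-up flow records standing in for real telemetry:

    ```python
    # Sketch: share of traffic on commonly abused ports (SSH=22, RDP=3389)
    # that is analyzed and controlled by policy. Records are illustrative.

    ABUSED_PORTS = {22, 3389}

    flows = [
        # (source host, destination host, destination port, covered by policy?)
        ("hostA", "hostB", 22, True),
        ("hostC", "hostD", 3389, False),
        ("hostE", "hostF", 22, True),
    ]

    risky = [f for f in flows if f[2] in ABUSED_PORTS]
    covered = sum(1 for f in risky if f[3])
    print(f"SSH/RDP flows under policy: {covered}/{len(risky)} ({covered / len(risky):.0%})")
    ```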

    AI can help ease adoption

    Since it’s 2025, no conversation about any technology can be complete without mention of AI. For its part, Akamai is investing in AI to help improve the user experience with microsegmentation. 

    Weber outlined three specific areas where AI is improving the microsegmentation experience. First, AI can automatically identify and tag workloads. It does this by analyzing traffic patterns, running processes and other data points. This eliminates manual classification work.

    Second, AI assists in recommending security policies faster and with more granularity than most network admins and application owners can achieve manually. This capability is helping organizations implement policies at scale.

    Third, natural language processing through AI assistants helps users mine and understand the significant amount of data microsegmentation solutions collect. This works regardless of their experience level with the platform.

    Implementation guidance

    According to the survey, 50% of non-adopters plan to implement microsegmentation within the next 24 months. For those looking to implement microsegmentation effectively, the report outlines four key steps:

    • Achieve deep, continuous visibility: Map workloads, applications and traffic patterns in real time to surface dependencies and risks before designing policies
    • Design policies at the workload level: Apply fine-grained controls that limit lateral movement and enforce zero-trust principles across hybrid and cloud environments
    • Simplify deployment with scalable architecture: Adopt solutions that embed segmentation into existing infrastructure without requiring a full network redesign
    • Strengthen governance and automation: Align segmentation with security operations and compliance goals, using automation to sustain enforcement and accelerate maturity


  • Electrifying Everything Will Require Multiphysics Modeling

    A prototyping problem is emerging in today’s efforts to electrify everything. What works as a lab-bench mockup breaks in reality. Harnessing and safely storing energy at grid scale and in cars, trucks, and planes is a very hard problem that simplified models sometimes can’t touch.

    “In electrification, at its core, you have this combination of electromagnetic effects, heat transfer, and structural mechanics in a complicated interplay,” says Bjorn Sjodin, senior vice president of product management at the Stockholm-based software company COMSOL.

    COMSOL is an engineering R&D software company whose tools simulate not just a single phenomenon—for instance, the electromagnetic behavior of a circuit—but all the pertinent physics needed to develop new technologies under real-world operating conditions.

    Engineers and developers gathered in Burlington, Mass., on 8–10 Oct. for COMSOL’s annual Boston conference, where they discussed simulations that couple multiple physics domains at once. Multiphysics modeling, as the field is called, has become a component of electrification R&D that is more than just nice to have.

    “Sometimes, I think some people still see simulation as a fancy R&D thing,” says Niloofar Kamyab, a chemical engineer and applications manager at COMSOL. “Because they see it as a replacement for experiments. But no, experiments still need to be done, though experiments can be done in a more optimized and effective way.”

    Can Multiphysics Scale Electrification?

    Multiphysics, Kamyab says, can sometimes be only half the game.

    “I think when it comes to batteries, there is another attraction when it comes to simulation,” she says. “It’s multi-scale—how batteries can be studied across different scales. You can get in-depth analysis that, if not very hard, I would say is impossible to do experimentally.”

    In part, this is because batteries reveal complicated behaviors (and runaway reactions) at the cell level, and again, in unpredictable new ways, at the battery-pack level.

    “Most of the people who do simulations of battery packs, thermal management is one of their primary concerns,” Kamyab says. “You do this simulation so you know how to avoid it. You recreate a cell that is malfunctioning.” She adds that multiphysics simulation of thermal runaway enables battery engineers to safely test how each design behaves in even the most extreme conditions—in order to stop any battery problems or fires before they could happen.
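
    To see what such a study captures, here is a toy lumped-parameter model in which Arrhenius-style self-heating races against convective cooling — the essence of thermal runaway. All parameters are invented for illustration; a real COMSOL study couples detailed electrochemistry, heat transfer, and mechanics:

    ```python
    import math

    # Toy single-cell thermal balance: dT/dt = (Q_gen(T) - Q_cool(T)) / C.
    # Invented parameters; real multiphysics models are far more detailed.
    C = 40.0        # thermal mass, J/K
    HA = 0.05       # convective cooling coefficient x area, W/K
    T_AMB = 298.0   # ambient temperature, K
    A, EA, R = 1e13, 8e4, 8.314  # Arrhenius prefactor (W), activation energy (J/mol), gas constant

    def q_gen(T: float) -> float:
        """Exothermic side reactions accelerate exponentially with temperature."""
        return A * math.exp(-EA / (R * T))

    def simulate(T0: float, dt: float = 1.0, steps: int = 3600) -> str:
        T = T0
        for _ in range(steps):
            T += dt * (q_gen(T) - HA * (T - T_AMB)) / C
            if T > 500.0:  # in this toy model, heating has outrun cooling
                return "thermal runaway"
        return f"stable near {T:.0f} K"

    print("start 310 K ->", simulate(310.0))  # cooling wins: settles near ambient
    print("start 340 K ->", simulate(340.0))  # self-heating wins: runaway
    ```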

    Wireless charging systems are another area of electrification, with their own thermal challenges. “At higher power levels, localized heating of the coil changes its conductivity,” says Nirmal Paudel, a lead engineer at Veryst Engineering, an engineering consulting firm based in Needham, Mass. And that, he notes, in turn can change the entire circuit as well as the design and performance of all the elements that surround it.

    Electric motors and power converters require similar simulation savvy. According to electrical engineer and COMSOL senior application engineer Vignesh Gurusamy, older ways of developing these age-old electrical workhorse technologies are proving less useful today. “The recent surge in electrification across diverse applications demands a more holistic approach as it enables the development of new optimal designs,” Gurusamy says.

    And freight transportation: “For trucks, people are investigating, Should we use batteries? Should we use fuel cells?” Sjodin says. “Fuel cells are very multiphysics friendly—fluid flow, heat transfer, chemical reactions, and electrochemical reactions.”

    Lastly, there’s the electric grid itself. “The grid is designed for a continuous supply of power,” Sjodin says. “So when you have power sources [like wind and solar] shutting off and on all the time, you have completely new problems.”

    Multiphysics in Battery and Electric Motor Design

    Taking such an all-in approach to engineering simulations can yield unanticipated upsides as well, says Kamyab. Berlin-based automotive engineering company IAV, for example, is developing powertrain systems that integrate multiple battery formats and chemistries in a single pack. “Sodium ion cannot give you the energy that lithium ion can give,” Kamyab says. “So they came up with a blend of chemistries, to get the benefits of each, and then designed a thermal management system that matches all the chemistries.”

    Jakob Hilgert, who works as a technical consultant at IAV, recently contributed to a COMSOL industry case study. In it, Hilgert described the design of a dual-chemistry battery pack that combines sodium-ion cells with a more costly lithium solid-state battery.

    Hilgert says that using multiphysics simulation enabled the IAV team to play the two chemistries’ different properties off of each other. “If we have some cells that can operate at high temperatures and some cells that can operate at low temperatures, it is beneficial to take the exhaust heat of the higher-running cells to heat up the lower-running cells, and vice versa,” Hilgert said. “That’s why we came up with a cooling system that shifts the energy from cells that want to be in a cooler state to cells that want to be in a hotter state.”

    According to Sjodin, IAV is part of a larger trend in a range of industries that are impacted by the electrification of everything. “Algorithmic improvements and hardware improvements multiply together,” he says. “That’s the future of multiphysics simulation. It will allow you to simulate larger and larger, more realistic systems.”

    According to Gurusamy, GPU accelerators and surrogate models allow for bigger jumps in electric motor capabilities and efficiencies. Even seemingly simple components like the copper windings in a motor’s stationary core (the stator) provide parameters that multiphysics can optimize.

    “A primary frontier in electric motor development is pushing power density and efficiency to new heights, with thermal management emerging as a key challenge,” Gurusamy says. “Multiphysics models that couple electromagnetic and thermal simulations incorporate temperature-dependent behavior in stator windings and magnetic materials.”

    Simulation is also changing the wireless charging world, Paudel says. “Traditional design cycles tweak coil geometry,” he says. “Today, integrated multiphysics platforms enable exploration of new charging architectures,” including flexible charging textiles and smart surfaces that adapt in real-time.

    And batteries, according to Kamyab, are continuing a push toward higher power densities and lower price points. That push is changing not just the industries where batteries are already used, like consumer electronics and EVs; higher-capacity batteries are also driving new industries like electric vertical take-off and landing aircraft (eVTOLs).

    “The reason that many ideas that we had 30 years ago are becoming a reality is now we have the batteries to power them,” Kamyab says. “That was the bottleneck for many years. … And as we continue to push battery technology forward, who knows what new technologies and applications we’re making possible next.”



  • Oct. 16, 1975: The first GOES satellite launches

    A joint project of NASA and the National Oceanic and Atmospheric Administration (NOAA), the Geostationary Operational Environmental Satellites (GOES) program provides continuous monitoring of weather both on Earth and in space. The GOES satellites map lightning activity, measure and image atmospheric conditions, and track solar activity and space weather. This constant flow of data is… Continue reading “Oct. 16, 1975: The first GOES satellite launches” at Astronomy Magazine.



  • Scalar Physics, Alien Messages, and Bottomless Holes: Bizarre but Thought-Provoking Conspiracies!

    The conspiracy world is a strange place to step into. It is often a strange and potentially dangerous mix of obscured partial truths, bizarre claims that border on the preposterous, and muddied and distorted facts, statistics, and statements. Perhaps because of this, it is also a world where individuals or groups can hijack or even outright invent conspiracies for their own ends, a situation that potentially affects us all.



  • Oracle’s big bet for AI: Zettascale10

    Oracle Cloud Infrastructure (OCI) is not just going all-in on AI, but on AI at incredible scale.

    This week, the company announced what it calls the largest AI supercomputer in the cloud, OCI Zettascale10. The multi-gigawatt architecture links hundreds of thousands of Nvidia GPUs to deliver what OCI calls “unprecedented” performance.

    The supercomputer will serve as the backbone for the ambitious, yet somewhat embattled, $500 billion Stargate project.

    “The platform offers benefits such as accelerated performance, enterprise scalability, and operational efficiency attuned towards the needs of industry-specific AI applications,” Yaz Palanichamy, senior advisory analyst at Info-Tech Research Group, told Network World.

    How Oracle’s new supercomputer works

    Oracle’s new supercomputer stitches together hundreds of thousands of Nvidia GPUs across multiple data centers, essentially forming multi-gigawatt clusters. This allows the architecture to deliver up to 10X more zettaFLOPS of peak performance: an “unprecedented” 16 zettaFLOPS, the company claims.

    To put that in perspective, one zettaFLOPS is 10 to the 21st power (a 1 followed by 21 zeroes) floating point operations per second — one sextillion — allowing systems to perform intensely complex computations, like those required by the most advanced AI and machine learning (ML) systems. That compares to computers working at gigaFLOPS (a 1 followed by 9 zeroes) or exaFLOPS (a 1 followed by 18 zeroes) speeds.
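
    A quick sanity check on those orders of magnitude (the 16 zettaFLOPS figure is Oracle’s claimed peak, as above):

    ```python
    # FLOPS = floating point operations per second.
    GIGA, EXA, ZETTA = 10**9, 10**18, 10**21

    peak = 16 * ZETTA  # Oracle's claimed peak for OCI Zettascale10
    print(f"16 zettaFLOPS = {peak:.1e} FLOPS")
    print(f"  = {peak // EXA:,} exaFLOPS")
    print(f"  = {peak / GIGA:.1e} gigaFLOPS")
    ```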

    “OCI Zettascale10 was designed with the goal of integrating large-scale generative AI use cases, including training and running large language models,” said Info-Tech’s Palanichamy.

    Oracle also introduced new capabilities in Oracle Acceleron, its OCI networking stack, that it said helps customers run workloads more quickly and cost-effectively. They include dedicated network fabrics, converged NICs, and host-level zero-trust packet routing that Oracle says can double network and storage throughput while cutting latency and cost.

    Oracle’s zettascale supercomputer is built on the Acceleron RoCE (RDMA over Converged Ethernet) architecture and Nvidia AI infrastructure. This allows it to deliver what Oracle calls “breakthrough” scale, “extremely low” GPU-to-GPU latency, and improved price/performance, cluster use, and overall reliability.

    The new architecture has a “wide, shallow, resilient” fabric, according to Oracle, and takes advantage of switching capabilities built into modern GPU network interface cards (NICs). This means it can connect to multiple switches at the same time, but each switch stays on its own isolated network plane.

    Customers can thus deploy larger clusters, faster, while running into fewer stalls and checkpoint restarts, because traffic can be shifted to different network planes and re-routed when the system encounters unstable or contested paths.
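
    A hedged sketch of that multi-plane behavior (the plane names and health model are invented; Oracle has not published Acceleron’s internals in this form): each NIC attaches to several isolated planes, and a flow simply shifts planes instead of stalling when its usual path degrades.

    ```python
    import random

    # Toy model: a GPU NIC connected to several isolated network planes.
    # Plane names and health states are invented for illustration.
    PLANES = ["plane-0", "plane-1", "plane-2", "plane-3"]
    HEALTHY = {"plane-0": True, "plane-1": False, "plane-2": True, "plane-3": True}

    def pick_plane(flow_id: int) -> str:
        """Hash each flow onto a plane; reroute to a healthy plane if needed."""
        preferred = PLANES[flow_id % len(PLANES)]
        if HEALTHY[preferred]:
            return preferred
        return random.choice([p for p in PLANES if HEALTHY[p]])

    for flow in range(5):
        print(f"flow {flow} -> {pick_plane(flow)}")  # flow 1 avoids unstable plane-1
    ```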

    The architecture also features power-efficient optics and is “hyper-optimized” for density, as its clusters are located in large data center campuses within a two-kilometer radius, Oracle said.

    “The highly-scalable custom design maximizes fabric-wide performance at gigawatt scale while keeping most of the power focused on compute,” said Peter Hoeschele, VP for infrastructure and industrial compute at OpenAI.

    OCI is now taking orders for OCI Zettascale10, which will be available in the second half of 2026. The company plans to offer multi-gigawatt deployments, initially targeting those with up to 800,000 Nvidia GPUs.

    But is it really necessary?

    While this seems like an astronomical amount of compute, “there are customers for it,” particularly envelope-pushing companies like OpenAI, said Alvin Nguyen, a senior analyst at Forrester.

    He pointed out that most AI models have been trained on text, essentially at this point comprising “all of human-written history.” Now, though, systems are ingesting large and compute-heavy files including images, audio, and video. “Inferencing is expected to grow even bigger than the training steps,” he said.

    And, ultimately, it does take a while for new AI factories/systems like OCI Zettascale10 to be produced at volume, he noted, which could lead to potential issues. “There is a concern in terms of what it means if [enterprises] don’t have enough supply,” said Nguyen. However, “a lot of it is unpredictable.”

    Info-Tech’s Palanichamy agreed that fears are ever-present around large-scale GPU procurement, but pointed to the Oracle-AMD partnership announced this week, aimed at achieving next-gen AI scalability.

    “It is a promising next step for safeguarding and balancing extreme scale in GPU demand, alongside enabling energy efficiency for large-scale AI training and inference,” he said.

    Advice to enterprises who can’t afford AI factories: ‘Get creative’

    Nguyen pointed out that, while OpenAI is a big Oracle partner, the bulk of the cloud giant’s customers aren’t research labs; they’re everyday enterprises that don’t necessarily need the latest and greatest.

    Their more modest requirements offer those customers an opportunity to identify other ways to improve performance and speed, such as by simply updating software stacks. It’s also a good time for them to analyze their supply chain and supply chain management capabilities.

    “They should be making sure they’re very aware of their supply chain, vendors, partners, making sure they can get access to as much as they can,” Nguyen advised.

    Not many companies can afford their own AI mega-factories, he pointed out, but they can take advantage of mega-factories owned and run by others. Look to partners, pursue other cloud options, and get creative, he said.

    There is no doubt that, as with the digital divide, there is a growing “AI divide,” said Nguyen. “Not everyone is going to be Number One, but you don’t have to be. It’s being able to execute when that opportunity arises.”



  • Three options for wireless power in the enterprise

    Wireless connectivity is standard across corporate campuses, in warehouses and factories, and even in remote locations. But what about wireless power? Can we get rid of all cables?

    Many of us are already using wireless chargers for our cellphones and other devices. But induction chargers aren’t a complete solution since they require very close proximity between the device and the charging station. So, what can enterprises try? Some organizations are deploying midrange solutions that use radio signals to transmit power through the air, and others are even experimenting with laser power transmission.

    Here are three emerging options for wireless power in the enterprise, and top use cases to consider.

    Induction charging

    Induction charging is about more than saving users the two seconds it would take them to plug a cord into their device. It can also be used to power vehicles, such as factory vehicles, or even cars and buses. Also known as near-field charging, it’s the single largest sector of the global wireless power market, according to Coherent Market Insights, accounting for 89% of 2025’s $16.6 billion wireless power market. Of that, consumer electronics accounted for 74%.

    Detroit launched its first electric roadway in 2023, allowing vehicles to charge their batteries wirelessly when they park on, or drive over, that particular section of the road. It requires special equipment to be installed in the vehicle, and it can be pricy for individual cars, but it can be a useful option for buses or delivery vans. The city plans to add a second segment next year, reports the American Society of Civil Engineers.

    The first commercial user will be UPS, which will also add stationary wireless charging at its Detroit facility. “This innovative approach will revolutionize how we power our electric vehicles and drive fleet electrification forward,” said Dakota Semler, CEO and co-founder of electric vehicle manufacturer Xos, in a press release.

    Florida plans to open an electrified road in 2027, and, in California, UCLA is testing an in-road inductive charging system for campus buses that is planned to be in operation by 2028. The goal is to have the project ready in time for the 2028 Olympic Games in Los Angeles.

    Utah plans to add in-motion charging lanes to streets in the next ten years, with the first one scheduled to be installed later this year as part of its electrified transportation action plan. A major impetus is the 2034 Winter Olympics, which will be held in Salt Lake City.

    Early adopters in Utah include Utah PaperBox and Boise Cascade’s Salt Lake distribution hub. There’s also an electrified roadway, currently in the pilot and development phase, at the Utah Inland Port, which will provide in-motion charging for freight vehicles. Construction of the world’s first one-megawatt wireless charging station has already begun at this facility, which will provide 30-minute fast charging to parked electric semi trucks.

    Europe is even further ahead. Sweden began working on the first electric road in 2018. In 2021, the one-mile stretch of electrified road was able to charge two commercial vehicles simultaneously, even though they had different battery systems and power requirements. In 2022, an electric bus began operating regularly on the road, charging while driving over it.

    The idea is that wireless in-motion charging will allow commercial vehicles to spend more time on the road and less time parked at charging stations — with less time wasted driving to and from the charging stations. It also allows vehicles to have smaller batteries and wider ranges. If the technology goes mainstream on public roads, drivers would be able to pay for the electricity they get in a way similar to how the E-Z Pass system works. But a more immediate application of the technology is the way UPS is deploying it — to charge vehicles at a corporate facility.

    There are several vendors that offer this technology:

    • HEVO offers wireless charging pads for garages and parking lots for both residential and commercial markets.
    • Plugless Power is another company offering wireless charging for parked vehicles, and claims to have provided 1 million charge hours to its customers, which include Google and Hertz. It provided the first wireless charging stations for Tesla Model S cars, and its wireless charging system for driverless shuttlebuses was the first of its kind in Europe.
    • WAVE offers wireless charging systems for electric buses, and its Salt Lake City depot can charge multiple vehicles automatically using inductive power. In addition to buses, other use cases include ports, such as the Port of Los Angeles, and warehouses and distribution centers, where it can provide power to electric yard trucks, forklifts, and other equipment.
    • InductEV offers high-power, high-speed wireless charging for commercial vehicles such as city buses, auto fleets and industrial vehicles, with on-route wireless charging solutions deployed in North America and Europe. It was named one of Time magazine’s best inventions of 2024. Seattle’s Sound Transit plans to have nearly half of its electric buses charged by on-route wireless chargers from InductEV, and municipal bus charging is already operational in Indianapolis, Martha’s Vineyard, and Oregon. The AP Moeller Maersk Terminal in Port Elizabeth, NJ is also using the company’s wireless chargers for its electric port tractors.

    Other companies offering wireless charging for industrial vehicles such as automated guided vehicles and material handling robots are Daihen, WiTricity, and ENRX.

    Meanwhile, cellphone charging pad-style wireless chargers also have plenty of business applications other than ease of use. Mojo Mobility, for example, offers charging systems designed to work in sterile medical environments.

    Ambient IoT and medium-range charging

    The most common type of ambient IoT is that powered by solar cells, where no power transmission is required at all. For example, ambient IoT is already reshaping agriculture, with solar-powered sensors placed in fields, greenhouses, and livestock areas, according to a September report from Omdia. Small devices can also be powered by motion or body heat.

    Transmitted wireless power, however, is more predictable and reliable and can work in a wider variety of environments — as long as the device is within range of the power transmitter or has a battery backup for when it’s not. Medium-range charging can work at a distance of a few inches to several yards or more. The less power the device requires, and the bigger its antenna, the longer the distance it can support.
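
    The distance, power, and antenna trade-off follows from basic link-budget math. Here is a sketch using the standard free-space Friis transmission equation; the transmitter power, antenna gains, and frequency are illustrative numbers, not any vendor’s specifications:

    ```python
    import math

    # Friis free-space equation: P_r = P_t * G_t * G_r * (lambda / (4*pi*d))^2.
    # Real indoor links deliver less due to multipath, losses, and regulation.

    def received_power_w(p_tx_w: float, g_tx: float, g_rx: float,
                         freq_hz: float, dist_m: float) -> float:
        wavelength = 3e8 / freq_hz
        return p_tx_w * g_tx * g_rx * (wavelength / (4 * math.pi * dist_m)) ** 2

    # A 1 W transmitter at 915 MHz with modest antenna gains, across a room:
    for d in (1.0, 3.0, 10.0):
        p = received_power_w(1.0, g_tx=6.0, g_rx=2.0, freq_hz=915e6, dist_m=d)
        print(f"{d:>4.0f} m -> {p * 1e3:.2f} mW available before conversion losses")
    ```

    Received power falls with the square of distance, which is why only low-power devices — and devices with larger, higher-gain antennas — work at longer ranges.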

    “It’s really pushing IoT to the next level,” says Omdia analyst Shobhit Srivastava.

    One popular use case is for sensors that are placed in locations where it’s not convenient to change batteries, he says, such as logistics. For example, Wiliot’s IoT Pixel is a postage stamp-sized sticker powered by radio waves that works at a range of up to 30 feet. Sold in reels, the sensors cost as little as 10 cents each when bought in bulk. They can monitor temperature, location, and humidity and communicate this information to a company network via Bluetooth.

    Sensors such as these can be attached to pallets to track their locations, says Srivastava. “People in Europe are very conscious about where their food is coming from and, to comply with regulations, companies need to have sensors on the pallets,” he says. “Or they might need to know that meat has been transported at proper temperatures.” The smart tags can just be slapped on pallets, he says. “This is a very cheap way to do this, even with millions of pallets moving around.”

    The challenge, Srivastava says, is that when the devices are moving from trucks to logistics hubs, to warehouses, and to retail stores, “they need to connect to different technologies.”

    Plus, all this data needs to be collected and analyzed. Some sensor manufacturers also offer cloud-based platforms to do this — and charge extra for the additional services.

    One wireless power company, Energous, is doing just that, with an end-to-end ambient IoT platform consisting of wirelessly powered sensors, RF-based energy transmitters, and cloud-based monitoring software. Its newest product, the e-Sense Tag, was announced in June. The company has sold over 15,000 transmitters, says Giampaolo Marino, senior vice president of strategy and business development, and counts two Fortune 10 companies — one in retail IoT and one in logistics and transportation — among its customers.

    The new tags will cost around $5 each, though the price is subject to change as the product is commercialized, Marino says. It’s a bit pricier than the disposable tags that cost under $1 each. But they will last for years, he adds, and can be reprogrammed.

    “Three years ago, it was science fiction,” Marino says. “Today, it’s something we’re deploying.” It’s similar to how we went from cable internet to Wi-Fi everywhere, he says.

    One use case that we’re not seeing yet for this kind of medium-range power transmission is factory robots. “We are far away from that,” says Omdia’s Srivastava. “The use cases are for low-power devices only.”

    Similarly, smartphones are energy-hungry devices, with their big displays and other components that draw power, he says. “So, smartphones won’t be ambient powered in any near future,” he says. “But small wearables, like wristbands in a hospital, can be ambient powered.”

    Like a warehouse, a hospital is a controlled physical location where power stations can be installed to provide power to the IoT devices, enabling a wide variety of applications, such as monitoring heart rates, respiration, and other key health metrics.

    Who’s in charge of wireless power networks?

    Is wireless power transmission a networking task that falls within the purview of the IT department, or is it handled on the operational or business unit level? According to Srivastava, that depends on the scale of the deployment. “If it’s a smaller deployment, with one or two locations to track, it might just stay with, say, the logistics team,” he says.

    But for larger deployments, with thousands of devices, ambient IoT is about more than just the power — there’s also the data transmission. “Then the network and security teams should be involved,” he says.

    Other issues that might come up beyond data security include electromagnetic interference and regulatory compliance for RF exposure.

    According to Omdia’s Srivastava and Energous, some of the notable vendors in the space are: Everactive (wireless ambient IoT devices); Wiliot (battery-free IoT Pixel tags); HaiLa Technologies (low-power wireless semiconductors); ONiO (self-powered, batteryless solutions); Atma.io from Avery Dennison (connected product cloud); EnOcean SmartStudio (sensor and data management); SODAQ (low-power hardware platforms); Lightricity (integration of energy-harvesting solutions into IoT systems); SML Group (retail RFID solution integrators); Sequans (integration of cellular IoT connectivity into ambient IoT systems); Powercast (both inductive and radio power transmission); Ossia (RF power using the FCC-approved Cota wireless power standard); and Minew (Bluetooth bridges and gateways to support Wiliot IoT Pixels).

    Laser charging

    For longer distances, lasers are the way to go.

    Lasers can be used to power drones and other aerial craft, or to collect power from remote wind turbines. They can also be used to send power to cell towers in areas where power cables are impractical to deploy.

    In May, DARPA set a new wireless power transmission record, delivering more than 800 watts of power over a distance of more than five miles. The technology could even collect power from space-based solar collectors and beam it down to Earth. In fact, it’s a bit easier to beam power up and down, since there’s less atmosphere to get in the way. Caltech’s Space Solar Power Project demonstrated this in 2023.

    In space, there are no day-night cycles, no seasons, and no cloud cover, meaning that solar panels can yield eight times more power than they can down on Earth. The idea is to collect power, transform it into electricity, convert it to microwaves, and transmit it down to where it’s needed, including locations that have no access to reliable power.

    In April, startup Aetherflux announced a $50 million funding round and plans to have its first low-Earth orbit test in 2026. China is currently working on a “Three Gorges dam in space” project, which will use super heavy rockets to create a giant solar power station in space, according to the South China Morning Post.

    The European Space Agency is expected to make a decision at the end of this year on proceeding with its own space-based solar power project, called SOLARIS.

    The same technology can also be used to transmit power from one satellite to another, and we’re already seeing a race to build a power grid in outer space.

    Star Catcher Industries plans to build a space-based network of solar power collectors that will concentrate solar energy and then transmit it to other satellites, meaning that companies will be able to send up more powerful satellites without expanding their physical footprint. On-the-ground testing was conducted earlier this year, and the first in-orbit test will take place in 2026.

    “Demand is growing exponentially for small satellites that can do more, from onboard processing to extended-duration operations,” said Chris Biddy, CEO at satellite manufacturer Astro Digital, which became Star Catcher’s first customer in September.



  • IBM unveils advanced quantum computer in Spain

    The Guipuzcoan city of San Sebastián is now operating IBM Quantum System Two, the most advanced quantum computer IBM has developed commercially to date. This is the first installation of its kind in Europe and the second in the world—the first was installed in the Japanese city of Kobe before the summer—not counting the equipment IBM has in its own quantum labs in New York.

    The IBM computer, which integrates a 156-qubit quantum chip (called Heron), is located in the new building of the Ikerbasque scientific foundation, which was also inaugurated on October 14; from there, access to the new equipment will be given to more than 20 research centers and more than 30 companies, including the energy giant Iberdrola. All of them are affiliated with the Basque Quantum (BasQ) program promoted by the Basque Government, which includes an investment of more than 153 million euros in the promotion of quantum computing and science, and an entire ecosystem designed to generate wealth and attract talent over the long term.

    “Today is not just the inauguration of an extraordinary machine,” said Juan Ignacio Pérez Iglesias, Minister of Science, Universities and Innovation of the Basque Government, at the presentation of the new computer, held in San Sebastián. Computerworld attended the presentation at the invitation of IBM. “Behind this announcement is a whole strategy, working with scientific, technological and business stakeholders, and with the help of IBM, to develop an entire ecosystem around quantum computing.” For Pérez Iglesias, “this is a founding moment.” A vision shared by the Lehendakari himself, Imanol Pradales, who emphasized at the event that the regional government’s focus on “cultivating and preserving science for decades has been key to IBM choosing us as a partner and traveling companion among the dozens of offers they had in quantum computing.”

    For the President of the Basque Government, the region’s quantum strategy (BasQ) “allows us to be a magnet for the generation of advanced knowledge and talent and also to align ourselves with the EU’s resilience and re-industrialization strategy.”

    Quantum computing, classical computing, and AI

    Jay Gambetta, current director of IBM Research and also present at the inauguration, emphasized that with this announcement, the company is “closer to the quantum advantage” it hopes to achieve by 2026, thanks, as Horacio Morell, president of IBM Spain, also pointed out, to the combination of new quantum computing with classical computing and artificial intelligence. “The combination of the three,” he asserted, “will allow us to tackle problems that have been intractable until now.”

    After indicating that, thanks to this project, “quantum computing is becoming a reality in Spain today, and the focus will now be on translating it into applications and greater competitiveness for industry,” Morell reviewed the Blue Giant’s roadmap for this emerging technology: in addition to pursuing the aforementioned quantum advantage next year, it also aims to bring to market in 2029 the first large-scale commercial quantum computer capable of error correction, i.e., without the famous quantum “noise.” “In addition to placing us [as a country] on the quantum computing map, this project will be a legacy for our society,” he noted.

    [Photo: IBM executives and officials from the Basque Government and regional councils in front of Europe’s first IBM Quantum System Two, located at the IBM-Euskadi Quantum Computational Center in San Sebastián, Spain. Credit: Irekia]

    Adolfo Morais, Deputy Minister of Science and Innovation of the Basque Government, explained to the press present at the event that the use of the new quantum machine in combination with other classical supercomputing systems, which will be modernized shortly, and artificial intelligence solutions will surely be a reality in 2027. “At the Euskadi Quantum Computational Center of Ikerbasque, we are already thinking about setting up a more modern supercomputer to replace the current Hyperion next year, so that in two years we will be able to use the three types of technology in combination.”

    “We don’t envision quantum computing working independently, just as we don’t envision classical computing working independently in the future,” emphasized Mikel Díez, director of Quantum Computing at IBM Spain, confirming that the new computer works in conjunction with classical computing architecture. “The purpose of our quantum computing proposal is for it to work in conjunction with classical computing,” he emphasized.  

    The machine, he explained, has a modular architecture that, for now, contains a single quantum chip, but more can be added. It takes up almost an entire room and must be kept at close to -273 degrees Celsius, near absolute zero, guaranteed by a cryogenic cooling system. “It consumes kilowatts, not megawatts, because the qubits barely require any energy; in this sense, it’s very different from large classical supercomputers, which require much more energy,” Díez added.

    Practical applications of an emerging technology

    Quantum computing, combined with classical supercomputing and increasingly powerful AI tools, is expected to disrupt not only the academic world but also various productive sectors. As Mikel Díez himself recalls in an interview with Computerworld, the Basque Government’s BasQ program contemplates three types of initiatives or projects that will work with quantum technology. “The first are related to the evolution of quantum technology itself: how to continue improving error correction, how to identify components of quantum computers, and how to optimize both these and the performance of these devices.”

    In this sense, as Díez himself acknowledges to this newspaper, “it’s true that the computer we’re inaugurating today in San Sebastián is a ‘noisy’ computer, and this, in some ways, still limits certain features.” Specifically, according to the IBM executive, the Quantum System Two has a rate of one error per thousand operations performed with a qubit. “Although it’s a very, very small rate, we’re aware that it can lead to situations where the result isn’t entirely guaranteed. What are we doing at this current moment? Post-processing the results we obtain and correcting possible errors.” Díez emphasizes that this will be done for the duration of this transition period until the arrival of a fault-tolerant quantum machine, as classical computers have been for years.
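
    That one-in-a-thousand figure is why post-processing matters: the chance of an error-free run decays geometrically with circuit length. A back-of-the-envelope sketch, assuming independent errors at the quoted rate:

    ```python
    # Probability that a circuit completes with no error, assuming independent
    # errors at the quoted rate of one per thousand qubit operations.
    p_err = 1e-3
    for n_ops in (100, 1_000, 10_000):
        p_clean = (1 - p_err) ** n_ops
        print(f"{n_ops:>6} operations -> {p_clean:.1%} chance of an error-free run")
    ```

    At a few hundred operations most runs are clean; at tens of thousands almost none are, which is why results are post-processed to correct likely errors until fault-tolerant machines arrive.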

    Another type of project to which quantum computing will be applied, from a more scientific perspective, is the behavior of materials or time crystals. Finally, he explains, there is a third line related to the application of this technology in industry. “For example, we are exploring how to improve investment portfolios for the banking sector, optimize the energy grid, or explore logistics problems.”

    [Photo: The Basque Government and IBM unveil the first IBM Quantum System Two in Europe at the IBM-Euskadi Quantum Computational Center in San Sebastián, Spain. Credit: IBM]

    Currently, according to Adolfo Morais of the Basque Government, “50% of the quantum computing capacity is already being used by the scientific sector. We hope that the remaining 50% will be used by other scientific institutions, as well as by private companies and public bodies.” Along these lines, he added, the three provincial councils as well as the Basque Executive have programs to accelerate use cases. “We not only want to attract Basque companies and entities, but the project has a global scope. In the coming weeks, we will announce how to apply for access to these quantum services,” he stated, emphasizing that the selection of projects will be based on their quality.

    Morais also emphasized that the collaborative framework between Ikerbasque and IBM has never been about “merely acquiring a device.” The contract, which amounts to €80 million (the figure initially announced was over €50 million), includes the acquisition of the quantum computer, “the most expensive component,” as well as other research and training initiatives. In fact, thanks to this agreement, 150 people have already been trained in this technology.

    This feature originally appeared on Computerworld Spain.



  • Q&A: IBM’s Mikel Díez on hybridizing quantum and classical computing

    Q&A: IBM’s Mikel Díez on hybridizing quantum and classical computing

    The Basque city of San Sebastián is beginning to play an interesting leading role in the radically different field of quantum technology. The official launch of IBM Quantum System Two, Big Blue’s most advanced quantum computer, took place in Donostia-San Sebastián. This infrastructure, which the technology company has implemented on the main campus of the Ikerbasque scientific foundation in Gipuzkoa, aims, in combination with classical computing, to solve problems that have so far remained intractable.

    The installation in San Sebastián is the first of its kind in Europe and the second worldwide, after Kobe, Japan. It stems from a 2023 strategic agreement between IBM and the Basque Government that brought the technology company’s advanced quantum machine to Spain. As Mikel Díez, director of Quantum Computing at IBM Spain, explains to Computerworld, the system hybridizes quantum and classical computing to leverage the strengths of both. “At IBM, we don’t see quantum computing working alone, but rather alongside classical computing so that each does what it does best,” he says.

    Is IBM Quantum System Two, launched today in San Sebastián, fully operational?

    Yes, with today’s inauguration, IBM Quantum System Two is now operational. This quantum computer architecture is the latest we have at IBM and the most powerful in terms of technological performance. From now on, we will be deploying all the projects and initiatives we are pursuing with the ecosystem on this computer, as IBM’s participation in this program is not exclusively infrastructure-based, but also involves promoting joint collaboration, research, training, and other programs.

    There are academic experts who argue that there are no 100% quantum computers yet, and there’s a lot of marketing from technology companies. Is this new quantum computer real?

    Back in 2019, we launched the first quantum computer available and accessible on our cloud. More than 30,000 people connected that day; since then, we’ve built more than 60 quantum computers, and as we’ve evolved them, we currently have approximately 10 operating remotely from our cloud in both the United States and Europe. We provide access, from both locations, to more than 500,000 developers. Furthermore, we’ve executed more than 3 trillion quantum circuits. This quantum computer, the most advanced to date, is a reality; it’s tangible, and it allows us to explore problems that until now couldn’t be solved. However, classical infrastructure is also needed to solve these problems. We don’t envision quantum computing going it alone, but rather working alongside classical computing so that each does what it does best. What do quantum computers do best? Exploring information maps, even across extremely demanding, exponentially large amounts of data.

    So IBM’s proposal, in the end, is a hybrid of classical computing with quantum computing.

    Correct. But, I repeat, quantum computers exist, we have them physically. In fact, the one we’re inaugurating today is the first of its kind in Europe, the second in the world.

    This hybrid proposal isn’t really a whim; it’s done by design. For example, when we need to simulate how certain materials behave in order to demand the best characteristics from them, the process is designed with an eye to what we want to simulate on classical computers and what we want to simulate on quantum computers, so that the sum of the two is greater than two. Another example is artificial intelligence, for which we must identify patterns within a vast sea of data. This must be done on the classical side, but also, where classical computing cannot reach, on the quantum side, so that the results of the latter converge throughout the entire artificial intelligence process. That’s the hybridization we’re seeking. In any case, I insist: in our IBM Quantum Network, we have more than 300 global organizations, private companies, public agencies, startups, technology centers, and universities running real quantum circuits.
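
    That division of labor is easiest to see in the variational loop used by many hybrid algorithms: a classical optimizer proposes parameters, the quantum side evaluates a quantity, and the two iterate. Below is a minimal, self-contained sketch of that loop, with a NumPy statevector calculation standing in for the quantum processor (on real hardware, that evaluation would be a job submitted to the QPU):

    ```python
    # Minimal sketch of a hybrid variational loop: a classical optimizer tunes
    # a parameter, while the "quantum" side evaluates an expectation value.
    # Here a NumPy statevector stands in for the quantum processor.
    import numpy as np

    def quantum_expectation(theta: float) -> float:
        """'Quantum' step: prepare Ry(theta)|0> and return <Z>."""
        state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
        z = np.array([[1.0, 0.0], [0.0, -1.0]])
        return float(state @ z @ state)

    # Classical step: gradient descent using the parameter-shift rule.
    theta, lr = 0.1, 0.4
    for _ in range(50):
        grad = (quantum_expectation(theta + np.pi / 2)
                - quantum_expectation(theta - np.pi / 2)) / 2
        theta -= lr * grad  # walk toward the minimum of <Z> = cos(theta)

    print(f"theta = {theta:.3f} (expect ~3.142), "
          f"<Z> = {quantum_expectation(theta):.3f} (expect ~ -1)")
    ```

    The structure is the point: the optimizer never sees quantum amplitudes, only the expectation values the quantum side returns, which is exactly the classical-quantum boundary Díez describes.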

    And, one clarification. Back in 2019, when we launched our first quantum computer, with between 5 and 7 qubits, whatever we could attempt with that capacity could be perfectly simulated on an ordinary laptop. After the advances of these years, simulating problems that require more than 60 or 70 qubits with classical technology is not possible even on the largest classical computer in the world. That’s why what we do on our current computers, with 156 qubits, is run real quantum circuits. They’re not simulated: they run real circuits to help with artificial intelligence problems, optimization, simulation of materials, emergent models, that kind of thing.
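
    The 60-to-70-qubit ceiling he cites matches the standard memory argument for exact statevector simulation, sketched below (sizes assume one 16-byte complex128 amplitude per basis state; the qubit counts are taken from the interview):

    ```python
    # An n-qubit statevector holds 2**n complex amplitudes; at 16 bytes each
    # (complex128), the memory needed doubles with every added qubit.
    for n in (30, 50, 60, 70, 156):
        gib = 2**n * 16 / 2**30  # gibibytes for the statevector alone
        print(f"{n:>3} qubits -> {gib:,.0f} GiB")
    ```

    Thirty qubits fit in a laptop’s 16 GiB of RAM; 60 qubits already demand around 16 exbibytes, beyond any machine on Earth; and 156 qubits are astronomically out of reach for exact simulation.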

    What kinds of things? What projects are being promoted with this new infrastructure?

    The Basque Government’s BasQ program includes three types of initiatives or projects. The first are related to the evolution of quantum technology itself: how to continue improving error correction, how to identify components of quantum computers, and how to optimize both these and the performance of these devices. From a more scientific perspective, we are working on how to represent the behavior of materials so that we can improve the resistance of polymers, for example. This is useful in aeronautics to improve aircraft suspension. We are also working on time crystals, which, from a scientific perspective, seek to improve precision, sensor control, and metrology. Finally, a third line relates to the application of this technology in industry; for example, we are exploring how to improve the investment portfolio for the banking sector, how to optimize the energy grid, and how to explore logistics problems.

    What were the major challenges in launching the machine you’re inaugurating today? Why did you choose the Basque Country to implement your second Quantum System Two?

    Before implementing a facility of this type in a geographic area, we assess whether it makes sense based on four main pillars: whether the area has technological capacity and expertise; talent and workforce; a research and science ecosystem; and, finally, an industrial fabric. I recall that IBM currently has more than 40 quantum innovation centers around the world, and this is one of them, with the difference that it is the first in Europe to have a machine.

    When evaluating the Basque Country option, we saw that the Basque Government already had supercomputing facilities, giving them technological experience in managing these types of facilities from a scientific perspective. They also had a scientific policy in place for decades, which, incidentally, had defined quantum physics as one of its major lines of work. They had long-standing talent creation, attraction, and retention policies with universities. And, finally, they had an industrial network with significant expertise in digitalization technologies, artificial intelligence, and industrial processes that require technology. In other words, the Basque Country option met all the requirements.

    You said the San Sebastián facility is the same as the one implemented in Japan. So what does IBM have in Germany?

    What we have in Germany is a quantum data center, similar to our cloud data centers, but focused on serving organizations that don’t have a dedicated computer on-site for their ecosystem. But in San Sebastián, as in Kobe (Japan), there’s an IBM System Two machine with a modular architecture and a 156-qubit Heron processor.

    Just as we have a quantum data center in Europe to provide remote service, we have another in the United States, where our quantum laboratory is also located. That is where, in addition to the current system (System Two), we are building the machine we will have ready in 2029, which will be fault-tolerant.

    And can we say this one is a quantum computer at scale and fault-tolerant?

    Look, the computers are computers, and they’re real. The nuance may come from their capabilities, and it’s true that the one we’re inaugurating today in San Sebastián is a noisy computer, and this, in some ways, still limits certain features.

    IBM’s roadmap for quantum computing is as follows. First, by 2026, just around the corner, we hope to demonstrate quantum advantage, which will come from using existing, real physical quantum computers alongside classical computers for specific processes. That is, we’re not just focused on whether it’s useful to have a quantum computer to run quantum circuits. Rather, as I mentioned before, by applying the capabilities of real quantum computers alongside classical ones, we’ll gain an advantage in simulating new materials, simulating research into potential new drugs, and optimizing processes for the energy grid or for financial investment portfolios.

    The second major milestone will come in 2029, when we expect to have a fault-tolerant machine with 200 logical qubits commercially available. The third milestone is planned for 2033, when we will have a fault-tolerant machine with 2,000 logical qubits, 10 times more, meaning we’ll be able to perform processing at scale without the capacity limitations that exist today with qubits that lack fault tolerance.

    You mentioned earlier that current quantum computers, including the one in San Sebastián, are noisy. How does this impact the projects you intend to support?

    What we, and indeed the entire industry, are looking for when we talk about the capacity of a quantum computer is processing speed, processing volume, and accuracy rate. The latter is related to errors. The computer we inaugurated here has a rate that is on the threshold of one error for every thousand operations we perform with a qubit. Although it’s a very, very small rate, we are aware that it can lead to situations where the result is not entirely guaranteed. What are we doing at this current time? Post-processing the results we obtain and correcting possible errors. Obviously, this is a transitional stage; what we want is for these errors to no longer exist by 2029, and for the results to no longer need to be post-processed to eliminate them. We want error correction to be automatic, as is the case with the computers we use today in our daily lives.
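
    The post-processing he describes can be pictured with its simplest textbook member, readout-error mitigation via a calibration matrix. A toy sketch follows (the calibration and count values are invented, and IBM’s production mitigation stack is considerably more sophisticated, with techniques such as zero-noise extrapolation):

    ```python
    # Toy post-processing: readout-error mitigation by inverting a calibration
    # (confusion) matrix for one qubit. All numbers are invented.
    import numpy as np

    # Column j = distribution of observed outcomes when basis state j was
    # prepared, measured during a calibration run.
    confusion = np.array([[0.97, 0.05],   # P(read 0 | prep 0), P(read 0 | prep 1)
                          [0.03, 0.95]])  # P(read 1 | prep 0), P(read 1 | prep 1)

    raw = np.array([0.62, 0.38])          # noisy measured distribution (invented)

    mitigated = np.linalg.solve(confusion, raw)  # undo the readout noise
    mitigated = np.clip(mitigated, 0.0, None)
    mitigated /= mitigated.sum()          # renormalize to a valid distribution

    print("raw:      ", raw)
    print("mitigated:", mitigated.round(3))
    ```

    The idea is just linear algebra: characterize how often each prepared state is misread, then invert that map on the measured counts.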

    But even today, with machines that have these flaws, we are seeing successes: HSBC, using our quantum computing, has achieved a 34% improvement in estimating the probability that automated government bond trades will be completed.

    So the idea they have is to improve quantum computing along the way, right?

    Exactly. It’s like the project to go to the Moon: although the goal was the Moon itself, other milestones were reached along the way. The same thing happens with quantum computing; the point is that you have to have a very clear roadmap, and IBM has one, at least until 2033.

    How do you view the progress of competitors like Google, Microsoft, and Fujitsu?

    In quantum computing, there are several types of qubits—the elements that store the information we want to use for processing—and we’re pursuing the option we believe to be the most robust: superconducting technology. Ours are superconducting qubits, and we believe this is a good choice because it’s the most widely recognized option in the industry.

    In quantum computing, it’s not all about the hardware; what matters is the qubits and what we call the stack: all the levels that must be traversed to reach a quantum computer. The important thing is to look at how strong you are at all of those levels, and there, well, some competitors work more on one part than another. But, once again, what industry and science value is a supplier having the complete stack, because that’s what allows them to conduct experiments and advance applications.

    A question I’ve been asked recently is whether we’ll have quantum computers at home.

    So, will we have them?

    This is similar to what happens with electricity. Our homes receive 220 volts through the low-voltage meter; we don’t need high voltage or large transformers, which are found in large centers on the outskirts of cities. It’s the same in our field: we’ll have data centers with classical and quantum supercomputers, but we’ll see the value in our homes when new materials, improved capabilities of artificial intelligence models, or even better drugs are discovered.

    Speaking of electricity, Iberdrola is one of the companies using the new quantum computer.

    There are various industrial players in the Basque Country’s quantum ecosystem, in addition, of course, to the scientific hub. Iberdrola recently joined the Basque Government’s BasQ program to utilize the capabilities of our computer and the entire ecosystem to improve its business processes and optimize the energy grid. They are interested in optimizing the predictive maintenance of all their assets, including home meters, wind turbines, and the various components in the energy supply chain.

    What other large companies will use the new computer?

    In the Basque ecosystem, we currently have more than 30 companies participating, along with technology and research centers. These are projects that have not yet been publicly announced because they are still under development, although I can mention some entities, such as Tecnalia, Ikerlan, the universities of Deusto and Mondragón, and startups such as Multiverse and Quantum Match.

    How many people are behind the San Sebastián quantum computer project?

    There will be about 400 researchers in the building where the computer is located, although these projects involve many more people.

    Spain has a national quantum strategy in collaboration with the autonomous communities. Is it possible that IBM will bring another quantum machine to another part of the country?

    In principle, the one we have on our roadmap is the computer implemented for the Basque Government. In Andalusia, we have inaugurated a quantum innovation center, but it’s a project that doesn’t have our quantum computer behind it. That is, people in Andalusia will be able to access our European quantum data center [the one in Germany]. In any case, the Basque and Andalusian governments are in contact so that, should they need it, Andalusia can access the quantum computer in San Sebastián.

    What advantages does being able to access a quantum machine in one’s own country bring?

    When we talked earlier about how we want to hybridize quantum computing with classical computing, well, this requires having the two types of computers adjacent to each other, something that happens in the San Sebastián center, because the quantum computer is on one floor and the classical computer will be on the other floor. Furthermore, there are some processes that, from a performance and speed perspective, require the two machines to be very close together.

    And, of course, if you own the machine, you control access: who enters, when, and at what speed. Obviously, in the case of quantum data centers like the one in Germany, you have to go through a queue, and there may be more congestion if people from many European countries connect at the same time.

    On the other hand, and this is relevant, at IBM we believe that having our quantum machines in a third-party facility raises the quality standards compared to having them only in our data centers or laboratories controlled by us.

    Beyond this, having a quantum machine in the country will generate an entire ecosystem around this technology and will be a focal point for attracting talent not only in San Sebastián but throughout Spain.

    Will there be another computer of this type in Europe soon?

    We don’t have a forecast for it at the moment, but who could have imagined four years ago that there would be a computer like this in Spain, and in the Basque Country in particular?

    We’ve talked a lot, but is there anything you’d like to highlight?

    As part of the Basque Government’s quantum program, we’ve trained more than 150 people from different companies, technology centers, startups, and more over the past two years. We’ve also created an ambassador program to help identify specific applications. We seek to reach across the entire ecosystem so that there are people with sufficient knowledge to understand where value is generated by using quantum technology.

    This interview originally appeared on Computerworld Spain.

