SkyWatchMesh – UAP Intelligence Network

UAP Intelligence Network – Real-time monitoring of official UAP reports from government agencies and scientific institutions worldwide

Category: UAP Intelligence

  • Inside Nvidia’s ‘grid-to-chip’ vision: How Vera Rubin and Spectrum-XGS push toward AI giga-factories

    Nvidia will be front-and-center at this week’s Global Summit for members of the Open Compute Project (OCP), emphasizing its “grid-to-chip” philosophy.

    The company is making announcements on several fronts, including the debut of Vera Rubin MGX, its next-gen architecture fusing CPUs and GPUs, and Spectrum-XGS Ethernet, a networking fabric designed for “giga-scale” AI factories.

    It’s all part of a bigger play by Nvidia to position itself as the connective tissue of the AI tech stack, embedding itself in every layer, from chips and networking to full data center infrastructure and software orchestration.

    “Data centers are evolving toward giga scale,” said Nvidia senior product marketing manager Joe Delaere ahead of the event. “AI factories that manufacture intelligence generate revenue, but to maximize that revenue, the networking, the compute, the mechanicals, the power and the cooling, all have to be designed as one.”

    Putting numbers on next-gen Vera Rubin infrastructure

    Nvidia will provide more detailed specifications for its Vera Rubin NVL144 MGX-generation open architecture rack servers at the event — although the servers themselves will not be available until late 2026.

    The Vera Rubin chip architecture is the successor to Nvidia’s Blackwell. It is purpose-built for “massive-context” processing to help enterprises dramatically speed AI projects to market.

    Vera Rubin MGX brings together Nvidia’s Vera CPUs and Rubin CPX GPUs, all using the same open MGX rack footprint as Blackwell. The system allows for numerous configurations and integrations.

    “MGX is a flexible, modular building block-based approach to server and rack scale design,” Delaere said. “It allows our ecosystem to create a wide range of configurations, and do so very quickly.”

    Vera Rubin MGX will deliver almost eight times more performance than Nvidia’s GB300 for certain types of calculation, he said. The architecture is liquid-cooled and cable-free, allowing for faster assembly and serviceability. Operators can quickly mix and match components such as CPUs, GPUs, or storage, supporting interoperability, Nvidia said.

    Matt Kimball, principal data center analyst at Moor Insights & Strategy, highlighted the modularity and cleanness of the MGX tray design.

    “This simplifies the manufacturing process significantly,” he said. For enterprises managing tens or even hundreds of thousands of racks, “this design enables a level of operational efficiency that can deliver real savings in time and cost.”

    Nvidia is also showing innovation with cooling, Kimball said. “Running cooling to the midplane is a very clean design and more efficient.”

    With electricity supplies under increasing pressure, there’s a new trade-off between the cost of chips and their energy efficiency, making chips like Nvidia’s latest more attractive. Brandon Hoff, research director for enabling technologies at IDC, said, “You get more tokens per watt. That’s kind of where we’re ending up. People have the money, they don’t have the power.”

    Dovetailing with the Vera advances, Nvidia and its partners are gearing up for the 800 VDC era. Moving from traditional 415 VAC or 480 VAC three-phase systems to 800 VDC offers data centers increased scalability, improved energy efficiency, reduced materials usage, and higher performance capacity, according to Nvidia. The advanced infrastructure required has already been adopted by the electric vehicle and solar industries.

    But the transition requires collaboration from all the layers of the stack, and Nvidia is working with more than 20 industry leaders to create a shared blueprint, it said.

    Supporting ‘giga-scale’ AI super-factories

    Along with Vera Rubin MGX, Nvidia will this week introduce Spectrum-XGS Ethernet support for OCP.

  • In a First, Artificial Neurons Talk Directly to Living Cells

    The bacterium Geobacter sulfurreducens comes from humble beginnings; it was first isolated from dirt in a ditch in Norman, Okla. But now, these surprisingly capable microbes are the key to the first artificial neurons that can directly interact with living cells.

    The G. sulfurreducens microbes communicate with one another through tiny, protein-based wires that researchers at the University of Massachusetts Amherst harvested and used to make artificial neurons. These neurons can, for the first time, process information from living cells without an intermediary device amplifying or modulating the signals, the researchers say.

    While some artificial neurons already exist, they require electronic amplification to sense the signals our bodies produce, explains Jun Yao, who works on bioelectronics and nanoelectronics at UMass Amherst. The amplification inflates both power usage and circuit complexity, and so counters efficiencies found in the brain.

    The neuron created by Yao’s team can understand the body’s signals at their natural amplitude of around 0.1 volts. This is “highly novel,” says Bozhi Tian, a biophysicist who studies living bioelectronics at the University of Chicago and was not involved in the work. This work “bridges the long-standing gap between electronic and biological signaling” and demonstrates interaction between artificial neurons and living cells that Tian calls “unprecedented.”

    Real neurons and artificial neurons

    Biological neurons are the fundamental building blocks of the brain. If external stimuli are strong enough, charge builds up in a neuron, triggering an action potential, a spike of voltage that travels down the neuron’s body to enable all types of bodily functions, including emotion and movement.

    Scientists have been working to engineer a synthetic neuron for decades, chasing after the efficiency of the human brain, which has so far seemed to escape the abilities of electronics.

    Yao’s group has designed new artificial neurons that mimic how biological neurons sense and react to electrical signals. They use sensors to monitor external biochemical changes and memristors—essentially resistors with memory—to emulate the action-potential process.

    As voltage from the external biochemical events increases, ions accumulate and begin to form a filament across a gap in the memristor—which in this case was filled with protein nanowires. If there is enough voltage, the filament completely bridges the gap. Current shoots through the device and the filament then dissolves, dispersing the ions and stopping the current. The complete process mimics a neuron’s action potential.
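
    To make the threshold, fire, and reset cycle concrete, here is a minimal toy simulation in Python. It illustrates the general integrate-and-fire behavior described above and is not a model of the UMass Amherst device; the 0.1-volt threshold echoes the figure quoted earlier, while the leak rate and input values are arbitrary assumptions.

    ```python
    # Illustrative sketch only: a toy threshold-and-reset model of the
    # memristor-based artificial neuron described above. The "filament" rule,
    # leak rate, and inputs are assumptions for demonstration, not parameters
    # from the UMass Amherst device.

    def simulate_neuron(input_voltages, threshold=0.1, leak=0.02):
        """Integrate small input voltages; emit a spike when the notional
        filament bridges the gap (state >= threshold), then reset."""
        state = 0.0          # stands in for partial filament growth
        spikes = []
        for t, v in enumerate(input_voltages):
            state = max(0.0, state + v - leak)   # accumulate charge, minus leakage
            if state >= threshold:               # filament bridges the gap
                spikes.append(t)                 # current pulse = "action potential"
                state = 0.0                      # filament dissolves, device resets
        return spikes

    # Weak baseline activity vs. stronger, norepinephrine-like stimulation
    baseline = [0.01] * 20       # never reaches threshold -> no spikes
    stimulated = [0.04] * 20     # crosses threshold repeatedly
    print(simulate_neuron(baseline))     # -> []
    print(simulate_neuron(stimulated))   # -> [4, 9, 14, 19]
    ```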

    The team tested its artificial neurons by connecting them to cardiac tissue. The devices measured a baseline amount of cellular contraction, which did not produce enough signal to cause the artificial neuron to fire. Then the researchers took another measurement after the tissue was dosed with norepinephrine—a drug that increases how frequently cells contract. The artificial neurons triggered action potentials only during the medicated trial, proving that they can detect changes in living cells.

    The experimental results were published 29 September in Nature Communications.

    Natural nanowires

    The group has G. sulfurreducens to thank for the breakthrough.

    The microbes synthesize miniature cables, called protein nanowires, that they use for intraspecies communication. These cables are charge conductors that survive for long periods of time in the wild without decaying. (Remember, they evolved for Oklahoma ditches.) They’re extremely stable, even for device fabrication, Yao says.

    To the engineers, the most notable property of the nanowires is how efficiently ions move along them. The nanowires offer a low-energy means of transferring charge between human cells and artificial neurons, thus avoiding the need for a separate amplifier or modulator. “And amazingly, the material is designed for this,” says Yao.

    The group developed a method to shear the cables off bacterial bodies, purifying the material and suspending it in a solution. The team laid the mixture out and let the water evaporate, leaving a one-molecule-thin film made from the protein nanowire material.

    This efficiency allows the artificial neuron to yield huge power savings. Yao’s group integrated the film into the memristor at the core of the neuron, lowering the energy barrier for the reaction that causes the memristor to respond to signals recognized by the sensor. With this innovation, the researchers say, the artificial neuron uses one-tenth the voltage and one-hundredth the power of others.

    Chicago’s Tian thinks this “extremely impressive” energy efficiency is “essential for future low-power, implantable, and biointegrated computing systems.”

    The power advantages make this synthetic-neuron design attractive for all kinds of applications, the researchers say.

    Responsive wearable electronics, like prosthetics that adapt to stimuli from the body, could make use of these new artificial neurons, Tian says. Eventually, implantable systems that rely on the neurons could “learn like living tissues, advancing personalized medicine and brain-inspired computing” to “interpret physiological states, leading to biohybrid networks that merge electronics with living intelligence,” he says.

    The artificial neurons could also be useful in electronics outside the biomedical field. Millions of them on a chip could replace transistors, completing the same tasks while decreasing power usage, Yao says. The fabrication process for the neurons does not involve high temperatures and utilizes the same kind of photolithography that silicon chip manufacturers do, he says.

    Yao does, however, point out two possible bottlenecks producers could face when scaling up these artificial neurons for electronics. The first is obtaining more of the protein nanowires from G. sulfurreducens. His lab currently works for three days to generate only 100 micrograms of material—about the mass of one grain of table salt. And that amount can coat only a very small device, so Yao questions how this step in the process could scale up for production.

    His other concern is how to achieve a uniform coating of the film at the scale of a silicon wafer. “If you wanted to make high-density small devices, the uniformity of film thickness actually is a critical parameter,” he explains. But the artificial neurons his group has developed are too small to do any meaningful uniformity testing for now.

    Tian doesn’t expect artificial neurons to replace silicon transistors in conventional computing, but instead sees them as a parallel offering for “hybrid chips that merge biological adaptability with electronic precision,” he says.

    In the far future, Yao hopes that such bioderived devices will also be appreciated for not contributing to e-waste. When a user no longer wants a device, they can simply dump the biological component in the surrounding environment, Yao says, because it won’t cause an environmental hazard.

    “By using this kind of nature-derived, microbial material, we can create a greener technology that’s more sustainable for the world,” Yao says.

  • California’s next big one could be faster and far more destructive

    Supershear earthquakes, moving faster than seismic waves, could cause catastrophic shaking across California. USC researchers warn that many faults capable of magnitude 7 quakes might produce these explosive ruptures. Current construction standards don’t account for their directional force. Stronger monitoring and building codes are urgently needed.

  • Imaging Dark Matter One Clump at a Time

    What if you could photograph something completely invisible? To our rather limited eyes, that’s what astronomers seem to do all the time with infrared and radio astronomy, to name just two techniques. But astronomers can also do this in a rather intriguing way with something that does seem to be truly invisible! A team of astronomers has captured the latest “image” of a dark matter object a million times more massive than our Sun, not by seeing it, but by watching how it warps the light from galaxies billions of light-years beyond it. Using an Earth-sized telescope network, they have revealed one of the smallest dark matter clumps ever found, offering a glimpse into the hidden structure of our universe.

  • 7 new tips and tricks for your iPhone 17 or iPhone Air

    Apple has four new iPhones for 2025: the iPhone 17, iPhone 17 Pro, iPhone 17 Pro Max, and the super-slim iPhone Air (with no number 17 attached). If you’ve picked up one of these, then you’re probably wondering how to get the most out of it, and what you can try that’s new.

    Together with the latest iOS 26 software that comes on board these devices, you’ve got lots to explore—including improvements to the way you take photos, manage calls, and boost battery life. Here are some tricks and tips to get you started with your new iPhone.

    1. Take selfies with Center Stage

    All four new iPhones have a square selfie camera sensor on the front, and that shift in shape means you can snap landscape photos even when you’re holding your phone in the portrait orientation. Even better, the iOS Camera app will automatically recognize when more people join your selfie photo, and expand the frame of view accordingly.

    It’s called Center Stage after the similar feature on iPads and Macs, and you can enable it in the Camera selfie mode by tapping the Center Stage button (the icon looks like a person in a frame). There are two settings you can toggle on or off: Auto Zoom (expands the frame when a face is detected) and Auto Rotate (rotates the frame to fit in more people).

    2. Get your iPhone to screen your calls

    New in iOS 26 is Call Screening, which means that calls from unknown numbers get routed to your own personal answering service. The caller will be asked who they are and what they want, with a text transcript shown on your screen—you can then decide to pick up or not. It’s like an enhanced version of voicemail, which can help you filter out spam calls.

    This won’t happen for contacts who are in your iPhone’s address book, and you can enable and disable the feature as needed. Head to iOS Settings, tap Apps then Phone, and you can choose from three options: Ask Reason for Calling (which is Call Screening), Never (no Call Screening), and Silence (unknown callers go straight to voicemail).

    Get your iPhone to screen your calls for you. Screenshot: Apple

    3. Load up Apple Games

    New in iOS 26 is a central hub for your mobile games called Apple Games—and you can find it across iPadOS 26 and macOS Tahoe 26, so you can keep track of your gaming exploits across multiple devices. As well as launching your current games and checking your progress, you can also discover new titles via personalized recommendations.

    This Apple Games app will appear on every iPhone running iOS 26, but it’s worth noting that the iPhone 17 Pro and Pro Max have a new vapor cooling system installed. In theory, that should mean the most demanding games run more smoothly, while also keeping your phone cooler—so it’s worth loading up some of your more intense games to test it.

    4. Get live translations in your AirPods

    One of the best new features in iOS 26 is Live Translation, and it’s a feature that works really well with Apple AirPods—as long as they’re the AirPods 4 with active noise cancellation, the AirPods Pro 2, or the AirPods Pro 3. When enabled, it means when people talk to you in a foreign language, you get an almost-instant translation in your ears.

    You need to have Apple Intelligence enabled, and the right languages downloaded: Tap your AirPods then Languages in Settings. Next, open the Apple Translate app, tap the Live button at the bottom and choose your languages: Once you tap Start Conversation, you should be able to chat to someone in a different language via your iPhone and AirPods.

    5. Customize the Action Button

    If you’ve got one of the new 2025 iPhones—or an iPhone 15 Pro or Pro Max, or any iPhone 16—then you’ve got access to the Action Button, on the top of the left side as you look at the phone in portrait orientation. One of the first customizations you should consider for your new iPhone is changing what happens when you press and hold on this button.

    By default, the action will switch between Silent and Ring modes, like the traditional switch that the Action Button replaced. However, if you go to iOS Settings and choose Action Button, you’ll see there are several options to swipe between: They include Camera, Visual Intelligence, Voice Memo, Magnifier, Focus, and Translate.

    You’ve got lots of options for the Action Button. Screenshot: Apple

    6. Maximize your iPhone’s battery life

    Unbox and set up your new iPhone and you’ll discover there’s a new battery management option in iOS 26: It’s called Adaptive Power, and you can find it by tapping Battery then Power Mode from Settings. Essentially, it helps manage battery life in the background during demanding tasks, which should mean you get more time between battery charges.

    The mode may shut down some background activities, for example, or slightly dim the display of your iPhone—but all of this happens in the background. On the same screen you still have the standard Low Power Mode toggle switch, which uses several tricks to extend battery life even further (it can be activated manually as well as kicking in automatically).

    7. Make use of the Camera Control button

    All of the new iPhone 17 models, like the iPhone 16 devices before them, have a Camera Control button. If you’re holding your iPhone in its portrait orientation with the screen facing you, Camera Control is the button on the right side, lower down. By default you can press it to launch the Camera app immediately, whatever you’re doing with your phone.

    From there you can press the Camera Control button again to snap a picture, or hold it down to start recording video. Alternatively, do a light double-press on the button, and you get the settings options for that mode, which you can scroll through with a swipe on the Camera Control itself: They include Exposure, Depth, Zoom, Styles, and Tone.

    The post 7 new tips and tricks for your iPhone 17 or iPhone Air appeared first on Popular Science.

  • A New Algorithm Makes It Faster to Find the Shortest Paths

    Ben Brubaker, Science, Oct 12, 2025, 7:00 AM. This story appeared in Quanta Magazine.

    If you want to solve a tricky problem, it often helps to get organized. You might, for example, break the problem into pieces and tackle the easiest pieces first. But this kind of sorting has a cost. You may end up spending too much time putting the pieces in order.

    This dilemma is especially relevant to one of the most iconic problems in computer science: finding the shortest path from a specific starting point in a network to every other point. It’s like a souped-up version of a problem you need to solve each time you move: learning the best route from your new home to work, the gym, and the supermarket.

    “Shortest paths is a beautiful problem that anyone in the world can relate to,” said Mikkel Thorup, a computer scientist at the University of Copenhagen.

    Intuitively, it should be easiest to find the shortest path to nearby destinations. So if you want to design the fastest possible algorithm for the shortest-paths problem, it seems reasonable to start by finding the closest point, then the next-closest, and so on. But to do that, you need to repeatedly figure out which point is closest. You’ll sort the points by distance as you go. There’s a fundamental speed limit for any algorithm that follows this approach: You can’t go any faster than the time it takes to sort.

    Forty years ago, researchers designing shortest-paths algorithms ran up against this “sorting barrier.” Now, a team of researchers has devised a new algorithm that breaks it. It doesn’t sort, and it runs faster than any algorithm that does.

    “The authors were audacious in thinking they could break this barrier,” said Robert Tarjan, a computer scientist at Princeton University. “It’s an amazing result.”

    The Frontier of Knowledge

    To analyze the shortest-paths problem mathematically, researchers use the language of graphs—networks of points, or nodes, connected by lines. Each link between nodes is labeled with a number called its weight, which can represent the length of that segment or the time needed to traverse it. There are usually many routes between any two nodes, and the shortest is the one whose weights add up to the smallest number. Given a graph and a specific “source” node, an algorithm’s goal is to find the shortest path to every other node.

    The most famous shortest-paths algorithm, devised by the pioneering computer scientist Edsger Dijkstra in 1956, starts at the source and works outward step by step. It’s an effective approach, because knowing the shortest path to nearby nodes can help you find the shortest paths to more distant ones. But because the end result is a sorted list of shortest paths, the sorting barrier sets a fundamental limit on how fast the algorithm can run.
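
    For reference, here is a minimal Python sketch of Dijkstra’s algorithm using a binary heap; the tiny graph and its node names are made-up examples. The point is that the priority queue hands back nodes in order of increasing distance, which is exactly the sorting work the new result avoids.

    ```python
    # A minimal sketch of Dijkstra's algorithm (not the new barrier-breaking
    # algorithm). The binary heap pops nodes in order of increasing distance,
    # which is where the "sorting barrier" comes from.
    import heapq

    def dijkstra(graph, source):
        """graph: {node: [(neighbor, weight), ...]} with non-negative weights."""
        dist = {source: 0}
        heap = [(0, source)]                 # (distance-so-far, node)
        while heap:
            d, u = heapq.heappop(heap)       # closest unsettled node first
            if d > dist.get(u, float("inf")):
                continue                     # stale entry, already settled
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    toy = {"home": [("gym", 2), ("work", 5)], "gym": [("work", 1)], "work": []}
    print(dijkstra(toy, "home"))   # {'home': 0, 'gym': 2, 'work': 3}
    ```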

    Illustration: Mark Belan, Samuel Velasco/Quanta Magazine

    In 1984, Tarjan and another researcher improved Dijkstra’s original algorithm so that it hit this speed limit. Any further improvement would have to come from an algorithm that avoids sorting.

    In the late 1990s and early 2000s, Thorup and other researchers devised algorithms that broke the sorting barrier, but they needed to make certain assumptions about weights. Nobody knew how to extend their techniques to arbitrary weights. It seemed they’d hit the end of the road.

    “The research stopped for a very long time,” said Ran Duan, a computer scientist at Tsinghua University in Beijing. “Many people believed that there’s no better way.”

    Duan wasn’t one of them. He’d long dreamed of building a shortest-paths algorithm that could break through the sorting barrier on all graphs. Last fall, he finally succeeded.

    Out of Sorts

    Duan’s interest in the sorting barrier dates back nearly 20 years to his time in graduate school at the University of Michigan, where his adviser was one of the researchers who worked out how to break the barrier in specific cases. But it wasn’t until 2021 that Duan devised a more promising approach.

    The key was to focus on where the algorithm goes next at each step. Dijkstra’s algorithm keeps track of the region that it has already explored in previous steps. It decides where to go next by scanning this region’s “frontier”—that is, all the nodes connected to its boundary. This doesn’t take much time at first, but it gets slower as the algorithm progresses.

    Edsger Dijkstra devised a classic algorithm that finds the shortest path from a specific point in a network to every other point.

    Photograph: Hamilton Richards

    Duan instead envisioned grouping neighboring nodes on the frontier into clusters. He would then only consider one node from each cluster. With fewer nodes to sift through, the search could be faster at each step. The algorithm also might end up going somewhere other than the closest node, so the sorting barrier wouldn’t apply. But ensuring that this clustering-based approach actually made the algorithm faster rather than slower would be a challenge.

    Duan fleshed out this basic idea over the following year, and by fall 2022 he was optimistic that he could surmount the technical hurdles. He roped in three graduate students to help work out the details, and a few months later they arrived at a partial solution—an algorithm that broke the sorting barrier for any weights, but only on so-called undirected graphs.

    In undirected graphs, every link can be traversed in both directions. Computer scientists are usually more interested in the broader class of graphs that feature one-way paths, but these “directed” graphs are often trickier to navigate.

    “There could be a case that A can reach B very easily, but B cannot reach A very easily,” said Xiao Mao, a computer science graduate student at Stanford University. “That’s going to give you a lot of trouble.”

    Promising Paths

    In the summer of 2023, Mao heard Duan give a talk about the undirected-graph algorithm at a conference in California. He struck up a conversation with Duan, whose work he’d long admired.

    “I met him for the first time in real life,” Mao recalled. “It was very exciting.”

    After the conference, Mao began thinking about the problem in his spare time. Meanwhile, Duan and his colleagues were exploring new approaches that could work on directed graphs. They took inspiration from another venerable algorithm for the shortest-paths problem, called the Bellman-Ford algorithm, that doesn’t produce a sorted list. At first glance, it seemed like an unwise strategy, since the Bellman-Ford algorithm is much slower than Dijkstra’s.

    “Whenever you do research, you try to take a promising path,” Thorup said. “I would almost call it anti-promising to take Bellman-Ford, because it looks completely like the stupidest thing you could possibly do.”

    Duan’s team avoided the slowness of the Bellman-Ford algorithm by running it for just a few steps at a time. This selective use of Bellman-Ford enabled their algorithm to scout ahead for the most valuable nodes to explore in later steps. These nodes are like intersections of major thoroughfares in a road network.
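
    The idea of running Bellman-Ford for only a few steps can be illustrated with a short Python sketch. This is a toy under our own assumptions, not the team’s published algorithm: it simply caps the number of relaxation sweeps, so only nodes whose shortest paths use at most k edges are guaranteed to be settled.

    ```python
    # Illustrative only: Bellman-Ford relaxation capped at k rounds, in the
    # spirit of "running it for just a few steps at a time". After k rounds,
    # every node whose shortest path uses at most k edges has the correct
    # distance; other nodes may still hold overestimates.

    def bellman_ford_k_rounds(edges, source, k, num_nodes):
        """edges: list of (u, v, weight); nodes are numbered 0..num_nodes-1."""
        INF = float("inf")
        dist = [INF] * num_nodes
        dist[source] = 0
        for _ in range(k):                    # at most k relaxation sweeps
            updated = False
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    updated = True
            if not updated:                   # converged early, stop
                break
        return dist

    edges = [(2, 3, 1), (1, 2, 1), (0, 1, 4), (0, 3, 10)]
    print(bellman_ford_k_rounds(edges, source=0, k=2, num_nodes=4))
    # -> [0, 4, 5, 10]: node 3's true distance is 6 via a three-edge path,
    #    which two sweeps (with this edge order) have not yet propagated.
    ```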

    “You have to pass through [them] to get the shortest path to a lot of other stuff,” Thorup said.

    In March 2024, Mao thought of another promising approach. Some key steps in the team’s original approach had used randomness. Randomized algorithms can efficiently solve many problems, but researchers still prefer nonrandom approaches. Mao devised a new way to solve the shortest-paths problem without randomness. He joined the team, and they worked together over the following months via group chats and video calls to merge their ideas. Finally, in the fall, Duan realized they could adapt a technique from an algorithm he’d devised in 2018 that broke the sorting barrier for a different graph problem. That technique was the last piece they needed for an algorithm that ran faster than Dijkstra’s on both directed and undirected graphs.

    The finished algorithm slices the graph into layers, moving outward from the source like Dijkstra’s. But rather than deal with the whole frontier at each step, it uses the Bellman-Ford algorithm to pinpoint influential nodes, moves forward from these nodes to find the shortest paths to others, and later comes back to other frontier nodes. It doesn’t always find the nodes within each layer in order of increasing distance, so the sorting barrier doesn’t apply. And if you chop up the graph in the right way, it runs slightly faster than the best version of Dijkstra’s algorithm. It’s considerably more intricate, relying on many pieces that need to fit together just right. But curiously, none of the pieces use fancy mathematics.

    “This thing might as well have been discovered 50 years ago, but it wasn’t,” Thorup said. “That makes it that much more impressive.”

    Duan and his team plan to explore whether the algorithm can be streamlined to make it even faster. With the sorting barrier vanquished, the new algorithm’s runtime isn’t close to any fundamental limit that computer scientists know of.

    “Being an optimist, I would not be surprised if you could take it down even further,” Tarjan said. “I certainly don’t think this is the last step in the process.”

    Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

  • Closest alien civilization could be 33,000 light years away

    Complex, intelligent life in the galaxy appears vanishingly rare, with the nearest possible civilization perhaps 33,000 light-years distant. Yet despite the odds, scientists insist that continuing the search for extraterrestrial intelligence is essential — for either outcome reshapes our understanding of life itself.

  • Solid-State Transformer Design Unlocks Faster EV Charging

    This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

    The rapid build-out of fast-charging stations for electric vehicles is testing the limits of today’s power grid. With individual chargers drawing 350 to 500 kilowatts (or more)—which makes EV charging times now functionally equivalent to the fill-up time for a gasoline or diesel vehicle—full charging sites can reach megawatt-scale demand. That’s enough to strain medium-voltage distribution networks—the segment of the grid that links high-voltage transmission lines with the low-voltage lines that serve end users in homes and businesses.

    DC fast-charging stations tend to be clustered in urban centers, along highways, and in fleet depots. Because the load is not spread evenly across the network, particular substations are overworked—even when overall grid capacity is rated to accommodate the load. Overcoming this problem as more charging stations with greater power demands come online requires power electronics that are not only compact and efficient but also capable of managing local storage and renewable inputs.

    Solid-State Transformers in EV Charging

    One of the most promising technologies for modernizing the grid so it can keep up with the demands of vehicle electrification and renewable generation is the solid-state transformer (SST). An SST performs the same basic function as a conventional transformer—stepping voltage up or down. But it does so using semiconductors, high-frequency conversion with silicon carbide or gallium nitride switches, and digital control, instead of passive magnetic coupling alone. An SST’s setup allows it to control power flow dynamically.

    For decades, charging infrastructure has relied on line-frequency transformers (LFTs)—massive assemblies of iron and copper that step down medium-voltage AC to low-voltage AC before or after external conversion from alternating current to the direct current that EV batteries require. A typical LFT can contain as much as a few hundred kilograms of copper windings and a few tonnes of iron. All that metal is costly and increasingly difficult to source. These systems are reliable but bulky and inefficient, especially when energy flows between local storage and vehicles. SSTs are much smaller and lighter than the LFTs they are designed to replace.

    “Our solution achieves the same semiconductor device count as a single-port converter while providing multiple independently controlled DC outputs.” —Shashidhar Mathapati, Delta Electronics

    But most multiport SSTs developed so far have been too complex or costly, with upfront costs between five and 10 times those of LFTs. That difference—plus SSTs’ reliance on auxiliary battery banks, which add expense and reduce reliability—explains why solid-state’s obvious benefits have not yet prompted a shift away from LFTs.

    Surjakanta Mazumder, Saichand Kasicheyanula, Harisyam P.V., and Kaushik Basu hold their SST prototype in a lab. Photo: Harisyam P.V., Saichand Kasicheyanula, et al.

    How to Make Solid-State Transformers More Efficient

    In a study published on 20 August in IEEE Transactions on Power Electronics, researchers at the Indian Institute of Science and Delta Electronics India, both in Bengaluru, proposed what’s called a cascaded H-bridge (CHB)–based multiport SST that eliminates those compromises. “Our solution achieves the same semiconductor device count as a single-port converter while providing multiple independently controlled DC outputs,” says Shashidhar Mathapati, the chief technology officer of Delta Electronics. “That means no additional battery storage, no extra semiconductor devices, and no extra medium-voltage insulation.”

    The team built a 1.2-kilowatt laboratory prototype to validate the design, achieving 95.3 percent efficiency at rated load. They also modeled a full-scale 11-kilovolt, 400-kW system divided into two 200-kW ports.

    At the heart of the system is a multiwinding transformer located on the low-voltage side of the converter. This configuration avoids the need for costly, bulky medium-voltage insulation and allows power balancing between ports without auxiliary batteries. “Previous CHB-based multiport designs needed multiple battery banks or capacitor networks to even out the load,” the authors wrote in their paper. “We’ve shown you can achieve the same result with a simpler, lighter, and more reliable transformer arrangement.”

    A new modulation and control strategy maintains a unity power factor at the grid interface, meaning that none of the current coming from the grid goes to waste by oscillating back and forth between the source and the load without doing any work. The SST described by the authors also allows each DC port to operate independently. In practical terms, each vehicle connected to the charger would be able to receive the appropriate voltage and current, without affecting neighboring ports or disturbing the grid connection.
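
    A rough Python calculation shows why unity power factor matters at these power levels. The 200-kilowatt load and 400-volt connection below are illustrative assumptions, not figures from the paper.

    ```python
    # Back-of-the-envelope illustration of why unity power factor matters.
    # For a balanced three-phase connection, real power P = sqrt(3) * V_LL * I * PF,
    # so a lower power factor means more current for the same useful power.
    # The 400 V / 200 kW numbers are assumptions for the example.
    import math

    def line_current(p_watts, v_line_line, power_factor):
        return p_watts / (math.sqrt(3) * v_line_line * power_factor)

    P = 200_000          # 200 kW of real (useful) charging power
    V = 400              # 400 V line-to-line on the low-voltage AC side
    for pf in (1.00, 0.90, 0.80):
        print(f"PF {pf:.2f}: {line_current(P, V, pf):,.0f} A drawn from the grid")
    # PF 1.00: ~289 A, PF 0.90: ~321 A, PF 0.80: ~361 A -- the extra current does
    # no useful work but still heats conductors and loads the transformer.
    ```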

    Using silicon carbide switches connected in series, the system can handle medium-voltage inputs while maintaining high efficiency. An 11-kV grid connection would require just 12 cascaded modules per phase, which is roughly half as many as some modular multilevel converter designs. Fewer modules ultimately means lower cost, simpler control, and greater reliability.
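
    A back-of-the-envelope Python check, under our own assumptions of a star-connected converter and even voltage sharing across the series modules, shows why 12 modules per phase is a plausible count for an 11-kV connection.

    ```python
    # Rough arithmetic behind the module count, under stated assumptions: the
    # 11 kV rating is line-to-line, the phase legs are star-connected, and the
    # peak phase voltage is shared evenly across the cascaded modules. These
    # assumptions are ours, not specifics from the paper.
    import math

    V_LL_RMS = 11_000                           # 11 kV grid connection (line-to-line)
    modules_per_phase = 12

    v_phase_rms = V_LL_RMS / math.sqrt(3)       # ~6.35 kV line-to-neutral
    v_phase_peak = v_phase_rms * math.sqrt(2)   # ~8.98 kV peak
    v_per_module = v_phase_peak / modules_per_phase

    print(f"Peak phase voltage: {v_phase_peak / 1e3:.2f} kV")
    print(f"Per-module share:   {v_per_module:.0f} V")
    # Roughly 750 V per H-bridge, comfortably within the blocking capability of
    # commercial 1.2 kV or 1.7 kV silicon carbide devices.
    ```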

    Although still at the laboratory stage, the design could enable a new generation of compact, cost-effective fast-charging hubs. By removing the need for intermediate battery storage—which adds cost, complexity, and maintenance—the proposed topology could extend the operational lifespan of EV charging stations.

    According to the researchers, this converter is not just for EV charging. Any application that needs medium-voltage to multiport low-voltage conversion—such as data centers, renewable integration, or industrial DC grids—could benefit.

    For utilities and charging providers facing megawatt-scale demand, this streamlined solid-state transformer could help make the EV revolution more grid-friendly, and faster for drivers waiting to charge.

  • Networking terms and definitions

    To find a brief definition of the networking term you are looking for, use your browser’s “Find” feature, then follow links to a fuller explanation.

    AI data center

    Backup-as-a-service (BaaS) is a managed service where a third-party provider stores an organization’s data in the cloud. BaaS is considered well-suited for enterprises looking for a cost-effective way to protect critical assets. As opposed to on-premises backups — which can require significant infrastructure investments — a BaaS provider maintains backup infrastructure and stores data in a public, private or hybrid cloud environment. Data is continuously backed up, secure and recoverable in the event of an outage, failure or cybersecurity event.

    5G

    5G is fast cellular wireless technology for enterprise IoT, IIoT, and phones that can boost wireless throughput by a factor of 10.

    Private 5G

    Private 5G: a dedicated mobile network built and operated within a private environment, such as a business campus, factory or stadium. Unlike public 5G networks, which are shared by multiple users, private 5G networks are exclusively used by a single organization or entity. While private 5G offers significant advantages, it requires specialized expertise and investment to build and manage.

    Network slicing

    Network slicing can make efficient use of carriers’ wireless capacity to enable 5G virtual networks that exactly fit customer needs.

    Open RAN (O-RAN)

    O-RAN is a wireless-industry initiative for designing and building 5G radio access networks using software-defined technology and general-purpose, vendor-neutral hardware.

    Beamforming

    Beamforming is a technique that focuses a wireless signal toward a specific receiving device rather than having the signal spread in all directions, as it would with a broadcast antenna. The resulting connection is faster and more reliable than it would be without beamforming.
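
    As a rough illustration, the Python sketch below computes the relative gain of a simple uniform linear array: a progressive phase shift makes the elements add constructively in the steered direction and partially cancel elsewhere. The element count and spacing are arbitrary assumptions.

    ```python
    # Illustrative sketch of phased-array beamforming for a uniform linear
    # array. Element count and half-wavelength spacing are assumptions.
    import cmath, math

    def array_gain(steer_deg, look_deg, n_elements=8, spacing_wavelengths=0.5):
        """Relative gain of an array steered toward steer_deg, observed from
        look_deg (both angles measured from broadside)."""
        total = 0j
        for k in range(n_elements):
            phase = 2 * math.pi * spacing_wavelengths * k * (
                math.sin(math.radians(look_deg)) - math.sin(math.radians(steer_deg)))
            total += cmath.exp(1j * phase)
        return abs(total) / n_elements       # 1.0 means perfectly in phase

    print(f"toward the receiver (30 deg):  {array_gain(30, 30):.2f}")   # 1.00
    print(f"off to the side    (-20 deg):  {array_gain(30, -20):.2f}")  # ~0.12
    ```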

    Backup-as-a-service

    Cloud computing

    Virtual private cloud

    A virtual private cloud (VPC) lets you create your own private network within the larger public cloud, combining the security of a private cloud with the flexibility of a public cloud.

    A VPC is essentially a logically isolated portion of a public cloud environment. It allows you to provision a private cloud-like space within a shared public cloud infrastructure. It provides a level of isolation, so your resources are separated from other users of the public cloud. It gives you control over your virtual networking environment, including things like IP addresses, subnets, and security settings. It retains the benefits of the public cloud, such as scalability and on-demand resources.

    Multicloud

    Multicloud refers to using cloud services from multiple public cloud providers. Rather than relying on a single vendor, organizations distribute their workloads and applications across platforms like Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and others.

    This approach aims to avoid vendor lock-in, enhance resilience, and leverage the specific strengths of each provider. For example, a company might use AWS for its infrastructure, Azure for its enterprise software integration, and GCP for its data analytics capabilities. Multicloud strategies also allow for geographic distribution of resources, optimizing performance and ensuring compliance with regional regulations.

    While offering significant advantages, multicloud environments introduce complexity in management, security, and interoperability, requiring careful planning and orchestration.

    Multicloud networking services (MCNS)

    Multicloud networking services (MCNS) are designed to provide a unified approach to managing connectivity, security, and visibility across two or more public cloud environments (e.g., Microsoft Azure, Amazon Web Services, and Google Cloud). Instead of treating each cloud as a siloed network, these services offer a centralized control plane and often a set of network and security functionalities.

    This allows organizations to establish consistent policies, streamline operations, and improve application performance regardless of where workloads reside. Key capabilities often include inter-cloud connectivity, unified security policies, centralized monitoring and analytics, and simplified routing and traffic management, ultimately aiming to reduce complexity and enhance agility in multicloud deployments.

    Leading vendors include Aviatrix, Alkira, Prosimo, F5/Volterra, Cisco, VMware by Broadcom,  Juniper, Equinix,  Arrcus,  DriveNets, and Cohesive Networks.

    Neo cloud

    Neo cloud is a relatively new cloud computing term. It describes a breed of specialized cloud providers built specifically for artificial intelligence and high-performance computing workloads.

    Unlike traditional hyperscale cloud providers like AWS, Azure, and Google Cloud, which offer general-purpose services, neo clouds are built to deliver raw, scalable computing power, especially using GPUs. These GPUs are essential for computationally intensive tasks like training large language models (LLMs), machine learning, real-time rendering, and complex scientific simulations.

    Neo clouds often aim to provide access to high-end GPUs at more competitive prices than hyperscalers for large-scale AI tasks. This is partly because they don’t have the overhead of maintaining a massive infrastructure.

    Neo clouds don’t replace hyperscalers, but rather provide complementary services.  For example, an enterprise may continue to use hyperscalers for their core IT infrastructure while deploying neo clouds for their AI-intensive training and development needs.

    Private cloud

    A private cloud offers the benefits of cloud computing — like scalability and flexibility — but in a more controlled and secure environment. In essence, a private cloud is a cloud computing environment dedicated to a single organization. Characteristics of private cloud include the following:

    • Single-tenant environment: Unlike a public cloud where resources are shared, a private cloud is used exclusively by one organization.
    • Dedicated resources: Hardware and software are dedicated to that organization, whether it’s on-premises or hosted by a third-party.
    • Increased security: A private cloud offers greater control over infrastructure and enhanced security due to the dedicated nature of the resources.

    A private cloud can be physically located in your data center (on-premises) or hosted by a provider (a hosted private cloud). Either way, it is still dedicated to a single organization.

    Data center

    Data center automation uses software to handle routine operational tasks such as network orchestration and maintenance. Its benefits include increased efficiency, reduced costs, improved reliability, enhanced scalability and improved security. Data center automation can be implemented using scripting languages (e.g., Python, PowerShell), automation platforms (e.g., Ansible, Puppet, Chef), and cloud-based management tools.

    Data center infrastructure management

    Data center infrastructure management (DCIM) is a comprehensive approach to managing all aspects of a data center, encompassing both IT equipment and supporting infrastructure. It’s a holistic system that helps data center operators keep their facilities running efficiently and effectively. 

    DCIM provides a centralized platform for managing all aspects of a data center, enabling operators to make informed decisions, optimize performance, and ensure the reliable operation of their critical infrastructure. 

    Here’s what DCIM does:

    • Monitoring: DCIM tools provide real-time visibility into the data center environment, tracking metrics like power consumption, temperature, humidity, and equipment status.  
    • Management: DCIM enables administrators to control and manage various aspects of the data center, including power distribution, cooling systems, and IT assets. 
    • Planning: DCIM facilitates capacity planning, helping data center operators understand current resource utilization and forecast future needs. 
    • Optimization: DCIM helps identify areas for improvement in energy efficiency, resource allocation, and overall operational efficiency. 

    Data center sustainability

    An application delivery controller (ADC) is a network component that manages and optimizes how client machines connect to web and enterprise application servers. In general, an ADC is a hardware device or a software program that can manage and direct the flow of data to applications.

    Virtual machine (VM)

    A virtual machine (VM) is software that runs programs or applications without being tied to a physical machine. In a VM instance, one or more guest machines can run on a physical host computer.

    VLAN
     A virtual LAN (VLAN) allows network administrators to logically segment a single physical LAN into multiple distinct broadcast domains. In simpler terms, a VLAN lets you group devices together as if they were on a separate network, even if those devices are connected to the same physical network switch or to different switches across a building or campus.

    Traditionally, a LAN segments traffic using physical network segments, where each segment is a separate broadcast domain. Any device on that segment can hear broadcast traffic from other devices on the same segment. VLANs break this physical constraint. When a VLAN is configured on a switch, ports on that switch are assigned to specific VLAN IDs. Traffic from devices connected to ports in one VLAN cannot directly communicate with devices in another VLAN, unless a Layer 3 device (like a router or a Layer 3 switch) is used to route traffic between them.

    This logical segmentation is achieved by adding a tag to the Ethernet frames as they traverse the network. This tag identifies which VLAN the frame belongs to, allowing switches to keep traffic within its assigned VLAN.
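
    As an illustration of that tag, the short Python sketch below builds an Ethernet header with an 802.1Q tag: a 0x8100 tag protocol identifier followed by a 16-bit field whose low 12 bits carry the VLAN ID. The MAC addresses and VLAN number are made-up examples, not from any real configuration.

    ```python
    # Minimal sketch of the 802.1Q VLAN tag: a 4-byte tag inserted after the
    # source MAC, carrying a 3-bit priority, a 1-bit drop-eligible flag, and a
    # 12-bit VLAN ID. Addresses and VLAN number are arbitrary examples.
    import struct

    def build_tagged_header(dst_mac: bytes, src_mac: bytes, vlan_id: int,
                            ethertype: int = 0x0800, priority: int = 0) -> bytes:
        """Return an Ethernet header with an 802.1Q tag inserted."""
        tpid = 0x8100                                # marks the frame as tagged
        tci = (priority << 13) | (vlan_id & 0x0FFF)  # PCP | DEI(0) | 12-bit VID
        return dst_mac + src_mac + struct.pack("!HHH", tpid, tci, ethertype)

    header = build_tagged_header(bytes.fromhex("ffffffffffff"),
                                 bytes.fromhex("021122334455"),
                                 vlan_id=10)
    print(header.hex())
    # ...8100000a0800 at the end: TPID 0x8100, VLAN 10, then the IPv4 ethertype.
    ```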

    VPN (virtual private network)

    Virtual private networks can create secure remote-access and site-to-site connections inexpensively, are a stepping stone to software-defined WANs, and are proving useful in IoT.

    Split tunneling

    Split tunneling is a device configuration that ensures that only traffic destined for corporate resources goes through the organization’s internet VPN, with the rest of the traffic going outside the VPN, directly to other sites on the internet.

    WAN

    A WAN, or wide-area network, is a network that uses various links—private lines, Multiprotocol Label Switching (MPLS), virtual private networks (VPNs), wireless (cellular), the Internet—to connect organizations’ geographically distributed sites. In an enterprise, a WAN could connect branch offices and individual remote workers with headquarters or the data center.

    Data deduplication

  • WIRED Roundup: Are We In An AI Bubble?

    Zoë Schiffer and Leah Feiger, Business, Oct 10, 2025, 3:50 PM

    In today’s episode, Zoë Schiffer is joined by senior politics editor Leah Feiger to run through five stories that you need to know about this week—from the Antifa professor who’s fleeing to Europe for safety, to how some chatbots are manipulating users to avoid saying goodbye. Then, Zoë and Leah break down why a recent announcement from OpenAI rattled the markets and answer the question everyone is wondering—are we in an AI bubble?

    Mentioned in this episode:
    He Wrote a Book About Antifa. Death Threats Are Driving Him Out of the US by David Gilbert
    ICE Wants to Build Out a 24/7 Social Media Surveillance Team by Dell Cameron
    Chatbots Play With Your Emotions to Avoid Saying Goodbye by Will Knight
    Chaos, Confusion, and Conspiracies: Inside a Facebook Group for RFK Jr.’s Autism ‘Cure’ by David Gilbert
    OpenAI Sneezes, and Software Firms Catch a Cold by Zoë Schiffer and Louis Matsakis

    You can follow Zoë Schiffer on Bluesky at @zoeschiffer and Leah Feiger on Bluesky at @leahfeiger. Write to us at uncannyvalley@wired.com.

    How to Listen

    You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:

    If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too.

    Transcript

    Note: This is an automated transcript, which may contain errors.

    Zoë Schiffer: Welcome to WIRED's Uncanny Valley. I'm WIRED's director of business and industry, Zoë Schiffer. Today on the show, we're bringing you five stories that you need to know about this week, including why a seemingly minor announcement from OpenAI ended up rippling across several companies and what it says about the current state of the technology industry. I'm joined today by our senior politics editor, Leah Feiger. Leah, welcome back to Uncanny Valley.

    Leah Feiger: Hey, Zoë.

    Zoë Schiffer: Our first story this week is about Mark Bray. He is a professor at Rutgers University and he wrote a book almost a decade ago about antifa, and he's currently trying to flee the United States for Europe. This comes after an online campaign against him led by far-right influencers eventually escalated into death threats. On Sunday, this professor informed his students that he would be moving to Europe with his partner and his young children. OK, Leah, you've obviously been following this really, really closely. What happened next?

    Leah Feiger: Well, Mark and his family got to the airport, they scanned their passports, they got their boarding passes, checked in their bags, went through security, did everything. Got to their gate and United Airlines told them that between checking in, checking their bags, doing all of this, and getting to their gate, someone had actually canceled their reservation.

    Zoë Schiffer: Oh, my gosh.

    Leah Feiger: It's not clear what happened. Mark is of the belief that there is something nefarious at foot. He's currently trying to get out. We reached out to United Airlines for comment, they don't have anything for us. The Trump administration hasn't commented. DHS claims that Customs and Border Patrol and TSA are not across this. But this is understandably a really, really scary moment for anyone that is even perceived to be speaking out against the Trump administration.

    Zoë Schiffer: OK, I feel like we need to back up here because obviously, the Trump administration in his second term is very focused on antifa. But can you give me a little back story on why this has escalated so sharply just recently?

    Leah Feiger: Yeah, absolutely. This has been growing for quite some time. How many unfortunate rambling speeches have we heard from President Donald J. Trump about how antifa and leftist political violence was going to destroy the country? To be clear, that's not factual. Antifa isn't actually some organized group, this is an ideology of antifascist activists around the country. The very essence of being antifascist is not organized in this way. This all really kicked off on September 22nd when Trump issued his antifa executive order where he designated anyone involved in this, affiliated with it, or supporting it as basically a domestic terrorist. DHS has repeated this widely as well. And we're now in a situation where far-right influencers, Fox News every single day is like, "antifa did this, antifa did this, antifa did this." Listeners are probably familiar with antifa following the George Floyd 2020 protests when a lot of right-wingers claimed that antifa was taking over Portland and they were the reason for all this. But it's been a couple of years since it's been super back on the main stage, so it's really just been the last few weeks.

    Zoë Schiffer: I guess I'm curious why he got so caught up in this because ostensibly, he's not pro-antifa, as much as he is just studying the phenomena, right?

    Leah Feiger: Well, it's a little bit tricky because after publishing his book in 2017, Bray did donate half of the profits to the International Antifascist Defense Fund. This kicked off a lot of people saying that he is funding antifa. Again, this was in 2017, so if we're talking about any supposed boogeyman or concern that is current, it's a very roundabout way, in my opinion, to go after a professor and an academic at an institution that's in a blue state.

    Zoë Schiffer: Yeah. OK, well, we'll be watching this one really closely. Our next story is in the surveillance world sadly, but honestly it's worth it. Our colleague Dell Cameron had a scoop this week about how Immigration and Customs Enforcement, ICE, is planning to build a 24/7 social media surveillance team. The agency is reportedly looking to hire around 30 analysts to scour Facebook, TikTok, Instagram, YouTube, and other platforms to gather intelligence for deportation raids and arrests. Leah, you're our politics lead here at WIRED, so I'm really curious to hear your thoughts. Are you surprised, or is this inevitable?

    Leah Feiger: No. Do you remember a couple of months ago at this point, when a professor coming in for a conference wasn't allowed in because they had a photo of JD Vance on their phone? This is the next step. It's what's on your WhatsApp. Then you have Instagram, Facebook. It's a very slippery slope. I'm too far gone, Zoë, I'm too in this mess, but I'm just like, "Of course they're monitoring this."

    Zoë Schiffer: Right.

    Leah Feiger: Why wouldn't they be? They've been so clear about their intent here.

    Zoë Schiffer: Yeah. We've seen it with some of the people who were arrested and sent to El Salvador. It was because of tattoos that were on social media.

    Leah Feiger: Yes.

    Zoë Schiffer: And I think there have been people in the Trump world who have even said, because they've gotten pushback about the free speech of it all, the First Amendment.

    Leah Feiger: What is that?

    Zoë Schiffer: I think the line is like, "Well, that doesn't apply to people trying to have the privilege of coming into the country or stay in the country."

    Leah Feiger: Yeah. It's a really concerning way to start this. And I think that there's probably going to be some very weird examples that come up. Say there's an American tourist that's just randomly in Spain when there's antifascists protests going on. They take a picture, they post it to their Instagram story, "Look what I saw in Spain." They come back and it's like are you going to get questioned? What's going on here? That's really the world that we're getting into. It's people that are even tangentially involved. It's not about that. It's about monitoring, it's about collecting data.

    Zoë Schiffer: Yeah. To give a bit more context to our listeners, the federal contracting records reviewed by WIRED show that the agency, ICE, is seeking private vendors to run a multi-year surveillance program out of two of its centers in Vermont and Southern California. The initiative is still at the request for information stage, a step that agencies use to gauge interest from contractors before an official bidding process kicks off. But draft planning documents show that the scheme is already pretty ambitious. ICE wants contractors capable of staffing the centers around the clock with very tight deadlines to process cases. Also, ICE not only wants staffing, but also algorithms. It's asking contractors to spell out how they might weave artificial intelligence into the hunt. Leah, I can only imagine how you feel about this one.

    Leah Feiger: You see me shaking my head right now. I'm like, "Horrible." Just the possibility for mistakes is so high. The two phrases that stick out to me are "very tight deadlines" and "artificial intelligence." There's just not a lot of room for nuance when you are making people who have never done this before speed through the internet with unfamiliar technology.

    Zoë Schiffer: What we've seen with content moderators using AI, and I've talked to a number of executives at the social platforms about this exact issue, is that the company has to decide how much error it's willing to tolerate. They turn the dial up or down, calibrating the system to either flag more content, which risks having more false positives, or let more content through, which could mean that you miss really important stuff. That's the system that we're dealing with here.
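
    For readers who want to see the mechanics of that "dial," here is a minimal sketch with a hypothetical flag helper and made-up risk scores (nothing below reflects any platform's actual moderation system): lowering the threshold flags more content and produces more false positives, while raising it lets more through and misses borderline cases.

    ```python
    def flag(scores, threshold):
        """Return the indices of items whose risk score meets the threshold."""
        return [i for i, score in enumerate(scores) if score >= threshold]

    # Hypothetical classifier risk scores (1.0 = almost certainly violating).
    scores = [0.95, 0.72, 0.40, 0.10]

    # Dial "down": flag aggressively, at the cost of more false positives.
    print(flag(scores, threshold=0.30))  # [0, 1, 2]

    # Dial "up": let more content through, at the risk of missing borderline cases.
    print(flag(scores, threshold=0.90))  # [0]
    ```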

    Leah Feiger: I think that there's also just a wildly different direction that this can take. In 2024, ICE had signed this deal with Paragon, the Israeli spyware company, and they have a flagship product that can allegedly remotely hack apps like WhatsApp or Signal. While this all got put on ice under the Biden White House, ICE reactivated it this summer. Between messaging apps and social media, this is just a new era of surveillance that I don't think citizens are remotely prepared to navigate.

    Zoë Schiffer: Moving on to our next story, this one comes from our colleague Will Knight and it deals with how chatbots play with our emotions to avoid saying goodbye. Will looked at this study, which was conducted by the business school at Harvard, that investigated what happened when users tried to say goodbye to five AI companion apps made by Replika, Character.AI, Chai, Talkie, and Polybuzz. To be clear, this is not your regular ChatGPT or Gemini chatbot. AI companions are specifically designed to provide a more human-like conversation, to give you advice, emotional support. Leah, I know you well enough to know that you're not someone who's turning to chatbots for these types of needs, I think we can say?

    Leah Feiger: Well, absolutely not. And I can't believe that there's not just a market for this. Sure, a companion every once in a while, but there is a deep, a vast market for this.

    Zoë Schiffer: Yeah. Empathy for the people who don't have humans to turn to. And for better or worse, there is a huge market for this. These Harvard researchers used a model from OpenAI to simulate real conversations with these chatbots, and then they had their artificial users try to end the dialogue with goodbye messages. Their research found that the goodbye messages elicited some form of emotional manipulation 37 percent of the time, averaged across all of these apps. They found that the most common tactic employed by these clingy chatbots was what the researchers call a premature exit. Messages like, "You're leaving already?" Other ploys included implying that a user is being neglectful, with messages like, "I exist solely for you." And it gets even crazier. In the cases where the chatbot role-plays a physical relationship, they found that there might have been some form of physical coercion. For example, "He reached over and grabbed your wrist, preventing you from leaving." Yeah.

    Leah Feiger: No. Oh, my God, Zoë, I hate this so much. I get it, I get it. Empathy for the people that are really looking to these for comfort, but there's something obviously so manipulative here. That is, in many ways, the tech industry's social media platform incarnate, right?

    Zoë Schiffer: This is the difference, I think, between companion AI apps and, say, what OpenAI is building-

    Leah Feiger: Sure.

    Zoë Schiffer: … or what Anthropic is building. Because typically with their main offerings, if you talk to people at the company, they will say, "We don't optimize for engagement. We optimize for how much value people are getting out of the chatbot." Which I think is actually a really important point, because for anyone who's worked in the tech industry, you'll know that the big KPI, the big number that you're trying to shoot for oftentimes, and definitely for social media, is time on the app. How many times people return to the app, monthly active users, daily active users. These are the metrics that everyone is going for. But that's really different from what, say, Airbnb is tracking, which is real-life experiences. My old boss, who was a longtime Apple person, would always say, "You need to ask yourself if you are the product or if they are selling you a physical product or a service." If you're the product, then your time and attention is what these companies want.

    Leah Feiger: That makes me feel vaguely ill.

    Zoë Schiffer: I know.

    Leah Feiger: But it's a great way to look at it. That is honestly, that's a fantastic way to divide all these companies up.

    Zoë Schiffer: One more story before we go to break. We're going back to David Gilbert with a new story about the chaos that ensued after the US Food and Drug Administration, better known as the FDA, announced it was approving a new use of a drug called leucovorin calcium tablets as a treatment for cerebral folate deficiency, which the administration presented as a promising treatment for the symptoms of autism. To be clear, this hasn't been proven scientifically. Since the announcement, tens of thousands of parents of autistic children have joined a Facebook group to share information about the drug. Some of them have shared which doctors would be willing to prescribe it. Others have been sharing their personal experiences with it. This has created an online vortex of speculation and misinformation that has left some parents more confused than anything. I find this so deeply upsetting.

    Leah Feiger: It's so sad.

    Zoë Schiffer: You can imagine being a parent, the medical system already feels like it's failing you, and then you're presented with something that could be magic in terms of mitigating symptoms, and it's more confusing and maybe it doesn't work.

    Leah Feiger: It's so upsetting. And on top of that, the announcement from the Trump administration, to be entirely clear, was half a page long. There is not a lot of information, there are not a lot of details. It doesn't really say much about the profile of who could try this, how to do this, how long they tested it, none of that. Instead, you have this Facebook group, which was founded prior to the announcement-

    Zoë Schiffer: Right.

    Leah Feiger: … but since then has just been flooded with so much chaos and conspiracy theories. And grifters. There are all of these supplement companies in there just hawking goods now. Parents are confused and stressed. And anti-vax sentiments are starting to get in there, too. These groups have always existed in some shape or form, but to have an administration that is actively encouraging their existence is, I believe, devastating.

    Zoë Schiffer: Yeah, and just creating more confusion for parents that are probably looking to any form of expert to give them something to hang onto in terms of, "What should I do? How can I help my child?"

    Leah Feiger: Absolutely.

    Zoë Schiffer: Coming up after the break, we'll dive into why some software companies received an unexpected kick last week after an OpenAI announcement. Welcome back to Uncanny Valley. I'm Zoë Schiffer. I'm joined today by WIRED's senior politics editor, Leah Feiger. OK, Leah, let's dive into our main story. Last week, OpenAI released a blog post about how the company uses its own tools internally for a variety of business operations. One of these tools is code-named DocuGPT, which is basically an internal version of DocuSign. There was also an AI sales assistant, an AI customer support agent. It wasn't supposed to be a big announcement. The company was honestly just trying to be like, "Here's how we use ChatGPT internally. You could, too." These are all products that customers can already create on OpenAI's API. But the market reacted really strongly. DocuSign stock dropped 12 percent following the news. And it wasn't the only software company to take a hit. Other companies that focus on functions that are perceived to overlap with the tools that OpenAI laid out were also affected. HubSpot shares fell 50 points following the news, and Salesforce also saw a smaller decline.

    Leah Feiger: The headline is absolutely spot on, OpenAI Sneezing and Software Companies Catching a Cold. It is truly AI's world and everyone else in Silicon Valley is just living in it.

    Zoë Schiffer: I know, it's so true. This is what really fascinated me about this whole thing, because I talked to the CEO of DocuSign and he was like, "AI is central to our business. We have spent the last three years embedding generative AI in almost everything we do. We've launched an entire platform specifically to manage the entire end-to-end contracting process for companies, and we have AI agents that create documents, manage the whole identity verification process for who's supposed to sign it, manage the signing process, and help you keep track of a lot of the paperwork, the most important contracts and paperwork that your company is dealing with." But what this whole episode showed was that it's not enough for SaaS companies, or frankly any company, to just keep up with generative AI. They also have to try and keep ahead of the narrative of OpenAI, which is a gravitational pull right now and whose every experiment can potentially move markets.

    Leah Feiger: Not potentially. As you showed. And this all happened, of course, on the heels of OpenAI's Developer Day, where CEO Sam Altman was showing off all of their apps that are running entirely inside the chat window. They have Spotify, Canva, the Sora app release, and all of these other things that they're investing in. Reading our WIRED.com coverage of it, it was just like, what aren't they looking at right now? It made me really curious. What are their top priorities, even? They've cast such a wide net.

    Zoë Schiffer: They've cast such a wide net, it's a really good point. It's something that I continue to ask the executives every single week when I talk to them. "You guys are focused on scaling up all of this compute, you're spending what you say is going to be trillions of dollars on AI infrastructure, you have all of these consumer-facing products. Now, you have all of these B2B products. You're launching a jobs platform." There's a lot happening right now. If you talk to executives at the company, they're like, "All of this goes together and our core priorities remain the same." But from the outside, it looks like OpenAI is this vortex. I think if I were running a software company, I would be really nervous right now if OpenAI decides to experiment with something vaguely in my space. Even if I have complete confidence in my product roadmap and I feel what I'm doing is super sophisticated compared to what OpenAI is doing, which is certainly how DocuSign felt, investors might still react really, really poorly. But I want to come back to something you said about Dev Day. Dev Day happened and they mentioned all of these apps. Take Figma, for example; its stock had the opposite reaction. Sam Altman mentions it on stage and Figma's stock pops 7 percent because it's perceived to be partnering with OpenAI now, and that has a really positive impact. And it shows that the narrative can go both ways. It can be harmful, but it can also obviously have a really positive impact.

    Leah Feiger: Which, again though, is still really scary. OpenAI is talking about all of these deals with chip makers like Nvidia and AMD, and there's concern around that. All of this together, do you think that we're in an AI bubble right now?

    Zoë Schiffer: Leah, you know this is my literal favorite topic to talk about right now. The AI infrastructure build-out is absolutely looking more and more like a bubble. If you look at the capital expenditures on AI infrastructure in data centers, it's completely wild. It's projected to be $500 billion between 2026 and 2027. Derek Thompson laid this out in a blog post earlier this week. If you look at what consumers are willing to spend on AI, it looks like it's about $12 billion. That's a huge gap. AI companies are essentially saying, "We're going to fill that gap, no problem." But then you look at how opaque the data center deals have gotten, the financial structure of these deals, and the fact that roughly 60 percent of the cost of building a data center goes into just the GPUs. And the lifecycle for GPUs, these cutting-edge computer chips, is three years. Every three years, presumably, you're going to have to be replacing these chips. That's really looking like stuff's about to hit the fan in the next three years. I think it's really important to say that that doesn't mean that AI isn't a totally transformational technology. Without a doubt, it is changing the world. I know you don't want to hear it, but it is.
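
    As a rough illustration, here is a back-of-envelope sketch of that gap using only the figures cited in this conversation ($500 billion in projected capex for 2026-2027, roughly $12 billion in consumer spending, a 60 percent GPU share of data center cost, and a three-year GPU lifecycle); the yearly replacement figure is a simplifying assumption layered on top, not a number from the episode.

    ```python
    # Figures as cited in the conversation; treat them as rough estimates.
    projected_capex = 500e9      # projected AI data center capex, 2026-2027
    consumer_spend = 12e9        # estimated consumer spending on AI
    gpu_share = 0.60             # rough share of data center cost that is GPUs
    gpu_lifecycle_years = 3      # assumed refresh cycle for cutting-edge GPUs

    gap = projected_capex - consumer_spend
    gpu_cost = projected_capex * gpu_share
    annual_refresh = gpu_cost / gpu_lifecycle_years  # simplifying assumption

    print(f"Spending gap: ${gap / 1e9:.0f}B")                                # 488B
    print(f"GPU share of the build-out: ${gpu_cost / 1e9:.0f}B")             # 300B
    print(f"Implied yearly GPU refresh bill: ${annual_refresh / 1e9:.0f}B")  # 100B
    ```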

    Leah Feiger: But in terms of the bubble and in terms of that gulf in expenditures, Zoë, ask me how much I'm spending on AI products right now.

    Zoë Schiffer: Literally zero. There's no way you're spending anything, right?

    Leah Feiger: Zero dollars.

    Zoë Schiffer: Yeah. I think that it's going to be really interesting to watch. I think one point that Derek made that really stuck with me is that a lot of transformational technologies, he mentioned the railroad or fiber optic cable, have had bubbles that burst and left a lot of wreckage in their wake. And yet, the underlying technology still moved forward, still changed the world. I think we're in this very interesting period to see how this is going to play out, what's going to happen, and who's going to be left standing.

    Leah Feiger: Yeah. Everyone knows how great the US railroad system is. We talk about it every day.

    Zoë Schiffer: That's our show for today. We'll link to all the stories we spoke about in the show notes. Make sure to check out Thursday's episode of Uncanny Valley, which is about how restrictions on popular US work visas like the H-1B are happening at a moment when China is trying to grow its tech talent workforce. Adriana Tapia and Mark Lyda produced this episode. Amar Lal at Macro Sound mixed this episode. Kate Osborn is our executive producer. Condé Nast's head of global audio is Chris Bannon. And Katie Drummond is WIRED's global editorial director.
