SkyWatchMesh – UAP Intelligence Network

UAP Intelligence Network – Real-time monitoring of official UAP reports from government agencies and scientific institutions worldwide

Author:

  • Network digital twin technology faces headwinds

    What if there were a way to reduce by as much as 70% the incidence of network outages caused by poorly executed software upgrades or the faulty installation of new hardware? What if there were a way to validate the current state of network configurations and track configuration drift to avoid network downtime, performance degradation or security breaches linked to misconfigurations of firewalls and other mission-critical network components?

    By applying digital twin technology, network teams can reap the benefits of modeling complex networks in software rather than what many enterprises do today – spend millions of dollars on a shadow IT testing environment or not test at all.

    Digital twin technology is most commonly used today in manufacturing environments, and while it has immense promise in enterprise network environments, there are hurdles that need to be overcome before it becomes mainstream.

    Digital twin: What it is and what it isn’t

    The way Fabrizio Maccioni describes it, digital twin is analogous to Google Maps.

    First, there’s a basic mapping of the network. And just like Google Maps is able to overlay information, such as driving directions, traffic alerts or locations of gas stations or restaurants, digital twin technology enables network teams to overlay information, such as a software upgrade, a change to firewall rules, new versions of network operating systems, vendor or tool consolidation, or network changes triggered by mergers and acquisitions.

    Network teams can then run the model, evaluate different approaches, make adjustments, and conduct validation and assurance to make sure any rollout accomplishes its goals and doesn’t cause any problems, explains Maccioni, senior director of product marketing for digital twin vendor Forward Networks.

    However, digital twin technology is not real time. “We don’t change anything. We’re read only. We don’t change the configuration of network devices,” Maccioni says. (Forward Networks does provide integrations with workflow automation vendor ServiceNow and with the open-source automation engine Ansible.)

    Gartner analyst Tim Zimmerman adds: “These tools typically operate on near-real time or snapshot-based data, which supports validation and documentation but limits their usefulness for real-time troubleshooting or active incident response. This distinction is important. While digital twins can improve planning and reduce cost associated with change, they are not currently positioned as operational tools for live network management.”

    “As a result, adoption has been largely limited to large, complex environments that can justify the investment in additional management software,” Zimmerman says.

    What are the benefits of digital twin in networking?

    “Configuration errors are a major cause of network incidents resulting in downtime,” says Zimmerman. “Enterprise networks, as part of a modern change management process, should use digital twin tools to model and test network functionality, business rules and policies. This approach will ensure that network capabilities won’t fall short in the age of vendor-driven agile development and updates to operating systems, firmware or functionality.”

    Gartner estimates that organizations using network digital twins to model configuration and software/firmware updates can reduce unplanned outages by 70%.

    Zimmerman adds that 15% of security breaches are caused by cloud misconfigurations or reconfigurations associated with common use cases like migrating an on-prem app to the cloud. He adds that digital twin tools can ensure that network policies don’t conflict with or prevent data flows as applications are migrated to the public cloud. Other use cases cited by Zimmerman include:

    • Capacity planning to model future traffic growth and infrastructure requirements.
    • Incident replay to reconstruct past outages or breaches to analyze root causes.
    • Security posture validation to simulate attack scenarios, as well as to test network segmentation and firewall policies.
    • Simulating boundary conditions that might differ from expected outcomes.

    The top driver for enterprise customers is risk mitigation, says Scott Wheeler, cloud practice lead at Asperitas Consulting, which provides an as-a-service option for network digital twins. “It’s a place to test things out to make sure the project doesn’t mess everything up.” For example, one enterprise client with a large global network used digital twin technology to model the consolidation of four routing protocols into one. “That implementation went off without a hitch,” says Wheeler.

    Another valuable use case is testing failover scenarios, says Wheeler. Network engineers can design a topology that has alternative traffic paths in case a network component fails, but there’s really no way to stress test the architecture under real world conditions. He says that in one digital twin customer engagement “they found failure scenarios that they never knew existed.”

    Maccioni adds that there are a variety of use cases that are attracting enterprise interest. Some customers start with firewall rules administration, a task that a large enterprise might spend millions of dollars a year on. Once an organization recognizes the benefits of automating firewall rule management, they might branch out into other areas, such as outage prevention, troubleshooting, and compliance.
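
    One flavor of the firewall-rule automation described above is detecting “shadowed” rules – rules that can never fire because an earlier rule already matches all of their traffic. A toy sketch of that check (the rule format here is hypothetical, purely for illustration, not any vendor’s schema):

```python
# Toy shadowed-rule check: a rule is shadowed if an earlier rule matches
# a superset of its traffic, so it can never fire. The rule format is
# hypothetical, for illustration only.
import ipaddress

def covers(a, b):
    """True if rule `a` matches every packet that rule `b` matches."""
    return (ipaddress.ip_network(b["src"]).subnet_of(ipaddress.ip_network(a["src"]))
            and ipaddress.ip_network(b["dst"]).subnet_of(ipaddress.ip_network(a["dst"]))
            and (a["port"] == "any" or a["port"] == b["port"]))

def shadowed_rules(rules):
    """Return indices of rules fully covered by some earlier rule."""
    return [j for j in range(len(rules))
            if any(covers(rules[i], rules[j]) for i in range(j))]

rules = [
    {"src": "10.0.0.0/8",  "dst": "0.0.0.0/0", "port": "any", "action": "deny"},
    {"src": "10.1.0.0/16", "dst": "0.0.0.0/0", "port": 443,   "action": "allow"},  # never fires
]
print(shadowed_rules(rules))  # → [1]
```

    A real rule base would also need protocol fields, port ranges and partial overlaps, but even this simple containment test illustrates why auditing thousands of rules by hand costs large enterprises so much.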

    “We’re also starting to see security use cases be a driver,” Maccioni says. Digital twin technology can help organizations create a single source of truth that helps eliminate friction between security operations and network operations teams when it comes to troubleshooting.

    What are the barriers to widespread adoption?

    One of the major barriers is that network digital twin is not offered by the major infrastructure vendors or network management vendors as part of their core functionality. That may change, but for now, if you want to deploy digital twin you need to engage with a third-party provider. “This is a whole new project, a whole separate environment. It’s a good-sized effort,” Wheeler explains.

    And there doesn’t seem to be a standard way to accomplish digital twin. For example, Forward Networks uses a proprietary data collection method called Header Space Analysis, which was developed while the founders of the company were at Stanford University. It enables the creation of a virtual copy of a network using configuration data and operational state information.
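
    The core idea behind Header Space Analysis – packet headers as wildcard bit-strings, and network devices as transfer functions over sets of headers – can be sketched in a few lines. This is a toy illustration of the published technique, not Forward Networks’ actual implementation:

```python
# Minimal illustration of the header-space idea: headers are wildcard
# bit-strings over {0, 1, 'x'}, and each device is a transfer function
# mapping an input header to a set of output headers. Toy sketch only.

def matches(header, rule):
    """True if every bit of `header` is compatible with `rule` ('x' = wildcard)."""
    return all(r == 'x' or h == 'x' or h == r for h, r in zip(header, rule))

def firewall(header):
    # Toy transfer function: drop anything whose first 4 bits match 1010
    # (a stand-in for a deny rule); forward everything else unchanged.
    return [] if matches(header, '1010xxxx') else [header]

def router(header):
    # Toy transfer function: rewrite the last 4 bits (a NAT-like step).
    return [header[:4] + '0001']

def reachable(header, path):
    """Push a header through a chain of device transfer functions."""
    frontier = [header]
    for device in path:
        frontier = [out for h in frontier for out in device(h)]
    return frontier

# A packet matching the deny rule is dropped; others traverse the path.
print(reachable('10100000', [firewall, router]))  # → []
print(reachable('11110000', [firewall, router]))  # → ['11110001']
```

    Because the analysis composes transfer functions symbolically rather than sending live packets, it can answer “can A reach B?” questions from configuration snapshots alone – which is also why, as Maccioni notes above, the model is read-only rather than real time.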

    Forward Networks enables customers to perform queries against the model. And it overlays other types of data, such as network performance monitoring, in order to facilitate troubleshooting. The snapshot process (collecting and processing the data) can take several hours in a large enterprise network and might be conducted, for example, a couple of times a day. So, the model is current, but not real-time.

    Asperitas uses an open-source framework called EVE-NG (emulated virtual environment – next generation) to reverse engineer the network. Wheeler explains if enterprise network engineers wanted to create a digital twin using EVE-NG, they would have to take on the coding work required to build the virtual network and would also need to constantly update it to reflect changes to the network.

    Wheeler adds that deploying digital twin requires a significant effort, both in terms of complexity and cost. And it is typically limited to modeling the impact of a change involving a single component from a single vendor – or to a specific part of the network, such as a campus, says Zimmerman.

    Even within a campus environment, Zimmerman has identified three levels of digital twins: The first level is network configuration and parameter/policy validation; the second level is single vendor equipment replacement or upgrade; and the third level is multiple vendor migration or vendor replacement.

    The future of digital twins in networking

    Gartner points out that “enterprise IT leaders continue to face a combination of challenges: increasing network complexity, heightened cybersecurity risk, and a shortage of skilled personnel. In this context, enterprise network digital twins are emerging as a tool to support network resilience and operations planning.”

    But that won’t happen overnight. Gartner expects that in the next 3-5 years, digital twins will be used to model parts of campus networks, and within the next 10 years they will expand to the entire network.

    Maccioni says network digital twin technology adoption had been somewhat slow because the technology represented a new concept for network engineers. “It is now resonating more with customers” as awareness grows and as enterprises begin to allocate budget for digital twin, he adds.

    Wheeler agrees that there are headwinds, including the fact that “you don’t have a lot of push from large network vendors to do it.” But he adds, “If some of those barriers are knocked down, I think you’ll see accelerated adoption.”

    Zimmerman adds that, “for broader adoption, we feel that the ability to model composite networks of individual components (whether it is a single vendor network or ultimately, a network with multiple vendor components) is needed to move the market ahead.”

    However, there’s a huge difference between deploying digital twin in a factory and in a global enterprise network. A manufacturing facility is a controlled environment with a discrete number of devices and a fixed, linear production process. A global network can have tens of thousands of endpoints and is dynamic – end users are mobile, data paths change in real time, etc.

    The ultimate vision, says Zimmerman, is a digital twin that “gives enterprise IT leaders the ability to test day-to-day operational workflows on their existing end-to-end network, simulating any operating system or configuration changes in real time and testing boundary conditions that today must be manually configured.”

    But, he adds, “this may require the processing power of quantum computing and the storage capacity of the cloud.”

    🛸 Recommended Intelligence Resource

    As UAP researchers and tech enthusiasts, we’re always seeking tools and resources to enhance our investigations and stay ahead of emerging technologies. Check out this resource that fellow researchers have found valuable.

    → Surfshark

  • Noncontact Motion Sensor Brings Precision to Manufacturing

    Aeva Technologies, a developer of lidar systems based in Mountain View, Calif., has unveiled the Aeva Eve 1V, a high-precision, noncontact motion sensor built on its frequency modulated continuous wave (FMCW) sensing technology. The company says that the Eve 1V measures an object’s motion with accuracy, repeatability, and reliability—all without ever making contact with the material. That last point is key for the Eve 1V’s intended environment: Industrial manufacturing.

    Today’s manufacturing lines are under pressure to deliver faster production, tighter tolerances, and zero defects, often while working with a wide variety of delicate materials. Traditional tactile tools such as measuring wheels and encoders can slip, wear out, and cause costly downtime. Many noncontact alternatives, while promising, are either too expensive or fall short in accuracy and reliability under real-world conditions, says Mina Rezk, cofounder and chief technology officer at Aeva.

    “Eve 1V was built to solve that exact gap: A compact, eye-safe, noncontact motion sensor that delivers submillimeter-per-second velocity accuracy without touching the material, so manufacturers can eliminate slippage errors, avoid material damage, and reduce maintenance-related downtime, enabling higher yield and more predictable operations,” Rezk says.

    Unlike traditional lidar that sends bursts of light and waits for those bursts to return to make measurements, FMCW continuously emits a low-power laser while sweeping its frequency. By comparing outgoing and returning signals, it detects frequency shifts that reveal both distance and velocity in real time. Adding the measurement of an object’s velocity to its position in three-dimensional space makes FMCW a type of 4D lidar.
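
    The textbook FMCW relations behind that description can be worked through numerically: on a triangular chirp, the sum of the up- and down-chirp beat frequencies encodes range, and their difference encodes Doppler velocity. The chirp parameters below are illustrative assumptions, not Aeva’s specifications:

```python
# Textbook FMCW relations (illustrative parameters, not Aeva's specs):
# a triangular chirp yields two beat frequencies whose sum encodes range
# and whose difference encodes radial velocity (Doppler).

C = 3.0e8             # speed of light, m/s
WAVELENGTH = 1.55e-6  # telecom-band laser wavelength, m (assumed)
BANDWIDTH = 1.0e9     # chirp bandwidth, Hz (assumed)
T_CHIRP = 1.0e-4      # chirp duration, s (assumed)

def beat_frequencies(range_m, velocity_mps):
    """Beat frequencies on the up- and down-chirp for one target."""
    f_range = 2 * BANDWIDTH * range_m / (C * T_CHIRP)  # range-induced shift
    f_doppler = 2 * velocity_mps / WAVELENGTH          # Doppler shift
    return f_range + f_doppler, f_range - f_doppler

def solve(f_up, f_down):
    """Invert the two measured beat frequencies back to range and velocity."""
    f_range = (f_up + f_down) / 2
    f_doppler = (f_up - f_down) / 2
    return f_range * C * T_CHIRP / (2 * BANDWIDTH), f_doppler * WAVELENGTH / 2

f_up, f_down = beat_frequencies(2.0, 0.005)  # target 2 m away, moving 5 mm/s
r, v = solve(f_up, f_down)
print(r, v)  # recovers ~2.0 m and ~0.005 m/s
```

    Note how a millimeter-per-second velocity produces a kilohertz-scale Doppler shift at optical wavelengths – a hint at why FMCW can resolve the submillimeter-per-second accuracy Rezk describes.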

    Eve 1V is the second member of Aeva’s Eve 1 family, following the launch of the Eve 1D earlier this year. The Eve 1D is a compact displacement sensor capable of detecting movement at the micrometer scale, roughly 1/100 the thickness of a human hair. “Together, Eve 1D and Eve 1V show how we can take the same FMCW perception platform and tailor it for different industrial needs: Eve 1D for distance measurement and vibration detection, and Eve 1V for precise velocity and length measurement,” Rezk says.

    Future applications could extend into robotics, logistics, and consumer health, where noncontact sensing may enable the detection of microvibrations on human skin for accurate pulse and blood-pressure readings.

    FMCW Lidar for Precision Manufacturing

    The company’s core FMCW architecture, originally developed for long-range 4D lidar for automobiles, can be adjusted through software and optics for highly precise motion sensing at close range in manufacturing, according to Rezk. This flexibility means the system can track extremely slow movements, down to fractions of a millimeter per second, in a factory setting, or it can monitor faster motion over longer distances in other applications.

    By avoiding physical contact, Eve 1V eliminates wear and tear, slippage, and contamination, as well as the need for physical access to the part. “That delivers three practical advantages in a factory: One, maintenance-free operation with no measuring wheels to replace or recalibrate; two, material friendliness—you can measure delicate, soft, or textured surfaces without risk of damage, and three, operational robustness—no slippage errors and fewer stoppages for service,” Rezk says. Put together, that means more uptime, steady throughput, and less scrap, he adds.

    When measuring velocity, engineers often rely on one of three tools: encoders, laser velocimeters, or camera-based systems. Each has its strengths and its drawbacks. Traditional encoders are low-cost but can wear down over time. Laser-based velocity-measurement systems, while precise, tend to be large and expensive, making them difficult to implement widely. And camera-based approaches can work for certain inspection tasks, but they usually require markers, controlled lighting, and complex processing to measure speed accurately.

    Rezk says that the Eve 1V system offers a balance of these options. It provides precise and consistent velocity measurements without contacting material, making it compact, safe, and simple to install. Its outputs are comparable with existing encoder systems, and because it doesn’t rely on physical contact, it requires minimal maintenance.

    This approach helps cut down on wasted energy from slippage, eliminates the need for maintenance tied to parts that wear out, and ultimately lowers long-term operating costs—especially when compared with traditional contact-based systems or expensive laser options.

    This method avoids stitching together frame-by-frame comparisons and resists interference from sunlight, reflections, or ambient light. Built on silicon photonics, it scales from micrometer-level sensing to millimeter-level precision over longer ranges. The result is clean, repeatable data with minimal noise—outperforming legacy lidar and camera-based systems.

    Aeva is expecting to begin full production of the Eve 1V in early 2026. The Eve 1V reveal follows a recent partnership with LG Innotek, a components subsidiary of South Korea’s LG Group, under which Aeva will supply its Atlas Ultra 4D lidar for automobiles, with plans to expand the technology into consumer electronics, robotics, and industrial automation.

  • Netskope expands ZTNA with device intelligence for IoT/OT environments

    Netskope this week announced it had updated its universal zero-trust network access (ZTNA) solution to extend secure access capabilities to Internet of Things (IoT) and operational technology (OT) devices that typically cannot run traditional agent software.

    Netskope says the product updates will help organizations address the security challenges of complex hybrid enterprise environments. The Universal ZTNA solution, which comprises Netskope One Private Access and Netskope Device Intelligence, now includes context-aware device intelligence capabilities that automatically discover and classify device risk through the 5G Netskope One Gateway. The company says the updated capabilities enable organizations to implement zero-trust policies for machines and robots that can’t support agent-based security tools.

    “Legacy VPNs, NACs, and early ZTNA tools weren’t designed for the scale, speed, or diversity of today’s enterprises,” said John Martin, chief product officer of Netskope, in a statement. Universal ZTNA gives organizations a consistent way to secure users and devices, whether they are remote or on the local network, he said. “Through smarter, risk-based policies, embedded protection, and seamless performance, we’re helping organizations cut complexity, reduce risk, and turn secure access into an enabler, rather than a barrier.”

    Robert Arandjelovic, senior director of global solution strategy at Netskope, explains that enterprise organizations are adopting universal ZTNA to expand beyond conventional security service edge (SSE) and ZTNA solutions and more effectively secure users and IoT/OT devices across all technology environments. According to a Gartner report, Universal ZTNA is expected to experience widespread adoption and grow by more than 40% by 2027.

    “Universal ZTNA provides this amazing point of consolidation, and it is because tool sprawl is already very present. If we can kill two birds with one stone, and modernize and simplify at the same time, that’s a huge driver. Security teams are never not going to be under budget pressure. In the security space, there are always more things to do and money to spend on it,” Arandjelovic says.

    The enhanced solution also reflects the ongoing convergence of networking and security technologies. Device Intelligence extends remediation and access control to east-west traffic through integrations with third-party NAC vendors, he says, while the firewall capabilities of Netskope One Gateway and Netskope One SSE provide zero trust enforcement points for north-south traffic.

    Netskope is also introducing embedded Universal ZTNA threat and data protection capabilities that inspect private application traffic for remote and local users. This unified approach addresses threats before they reach the network and safeguards sensitive data across all users and devices, Arandjelovic says.

    Netskope is using AI to streamline ZTNA management through its Netskope One Copilot for Private Access feature. The policy optimization tool uses AI to automate granular policy creation for discovered applications while continuously refining and auditing configurations. This is designed to help organizations accelerate ZTNA adoption, reduce complexity, and scale zero-trust implementations with less risk, according to Netskope.

    The enhanced Universal ZTNA solution, including Netskope One Private Access and Netskope Device Intelligence, is available now. More information is available on the Netskope blog.

  • AMD: Latest news and insights

    More processor coverage on Network World:
    Intel news and insights | Nvidia news and insights

    AMD continues to make gains in processor and data center markets thanks largely to its EPYC processors, which have chipped away at Intel’s long-standing dominance.

    According to AMD’s Q1 2025 results, revenue increased 36% over the same quarter in 2024 to $7.4 billion. More specifically, its data center segment revenue jumped by 57% year-over-year to $3.7 billion, driven largely by demand for EPYC CPUs and growing sales of AMD Instinct GPUs.

    The chip company also recently unveiled and began shipping the Instinct MI350 GPUs and previewed its next-generation, AI-focused MI400 series for future AI racks, underscoring its commitment to an open software ecosystem (ROCm).

    Finally, AMD’s Client segment revenue jumped 68% to $2.3 billion, fueled by strong demand for its “Zen 5” AMD Ryzen™ processors, including its Ryzen AI 300 series, indicating a strong push for “AI PC” market share.

    This news comes despite AMD facing a projected $1.5 billion revenue hit for fiscal 2025 due to U.S. export restrictions on AI chips to China.

    Latest AMD news and analysis

    AMD/OpenAI pact means new enterprise IT options

    October 7, 2025: Monday’s announcement that OpenAI and AMD have struck a deal could mean that AMD chips may become a viable enterprise IT option. That is good news, not because of AMD quality, which is seen as suboptimal by some, but because of the limits of Nvidia chip availability.

    AMD could be Intel’s next foundry customer

    October 3, 2025: AMD might be the latest Silicon Valley giant to join the Intel bailout parade as there are rumors that AMD is in talks to become an Intel Foundry customer. It’s unknown just how much of AMD’s business would move to Intel. AMD splits its business between TSMC and GlobalFoundries.

    IBM, AMD team on quantum computing

    August 26, 2025: IBM and AMD are working to blend Big Blue’s quantum computers with the chipmaker’s CPUs, GPUs and FPGAs to build intelligent, quantum-centric, high-performance computers. They plan to demonstrate later this year how IBM quantum computers can work with AMD technologies to deploy hybrid quantum-classical workflows.

    AMD warns of new Meltdown/Spectre-like CPU bugs

    July 11, 2025: AMD has issued an alert to users about a newly discovered form of side-channel attack similar to the infamous Meltdown and Spectre exploits that dominated the news in 2018. The potential exploits affect the full range of AMD processors – desktop, mobile and data center models – particularly 3rd and 4th generation Epyc server processors.

    DigitalOcean teams with AMD for low-cost GPU access

    June 25, 2025: Cloud infrastructure provider DigitalOcean Holdings announced a collaboration with AMD to provide DigitalOcean customers with low-cost access to AMD Instinct GPUs starting later this year.

    AMD rolls out first Ultra Ethernet-compliant NIC

    June 23, 2025: AMD will be the first to market with an Ultra Ethernet-based networking card, and Oracle will be the first cloud service provider to deploy it. The announcement came at the recent Advancing AI event, where AMD introduced its latest Instinct MI350 series GPUs and announced the MI400X, which will be delivered next year.

    AMD steps up AI competition with Instinct MI350 chips, rack-scale platform

    June 13, 2025: AMD has launched its latest accelerator chips and offered a glimpse into its AI infrastructure strategy, aiming to expand its role in the enterprise market, which Nvidia currently dominates.

    AMD launches new Ryzen Threadripper CPUs to challenge Intel’s workstation dominance

    May 21, 2025: Marking an aggressive push into the professional workstation and high-end desktop (HEDT) segments, AMD launched its latest HPC processors.

    Survey: AMD continues to take server share from Intel

    May 20, 2025: AMD continues to take market share from Intel, growing at a faster rate and closing the gap between the two companies to the narrowest it has ever been.

    AMD, Nvidia partner with Saudi startup to build multi-billion dollar AI service centers

    May 15, 2025: As part of the avalanche of business deals that came from President Trump’s Middle East tour, both AMD and Nvidia have struck multi-billion dollar deals with an emerging Saudi AI firm.

    AMD targets hosting providers with affordable EPYC 4005 processors

    May 14, 2025: AMD launched its latest set of data center processors, targeting hosted IT service providers. The EPYC 4005 series is purpose-built with enterprise-class features and support for modern infrastructure technologies at an affordable price, the company said.

    Jio teams with AMD, Cisco and Nokia to build AI-enabled telecom platform

    March 18, 2025: Jio has teamed up with AMD, Cisco and Nokia to build an AI-enabled platform for telecom networks. The goal is to make networks smarter, more secure and more efficient to help service providers cut costs and develop new services.

    AMD patches microcode security holes after accidental early disclosure

    February 3, 2025: AMD issued two patches for severe microcode security flaws, defects that AMD said “could lead to the loss of Secure Encrypted Virtualization (SEV) protection.” The bugs were inadvertently revealed by a partner.

  • Is It All About The Resources? UFOs, Water Extraction, and Power Blackouts!

    There are many aspects to the UFO and alien question, but the notion that they are here for our raw materials – a pursuit that seemingly results in mass power blackouts – is one that is often left unaddressed. The fact is, there are many such accounts on record indicating that these seemingly otherworldly vehicles are not only drawing on our own power reserves for their propulsion systems (we might assume) but are also extracting water from our rivers and even water towers, often thousands of gallons at a time.

  • AMD/OpenAI pact means new enterprise IT options

    Monday’s announcement that OpenAI and AMD have struck a deal, albeit an unusual one without cash commitments, could mean that AMD chips may become a viable enterprise IT option. That is good news, not because of AMD quality, which is seen as suboptimal by some, but because of the limits of Nvidia chip availability.

    The Monday announcement simply said that the two companies would work together and that they had crafted “a 6 gigawatt agreement to power OpenAI’s next-generation AI infrastructure across multiple generations of AMD Instinct GPUs. The first one gigawatt deployment of AMD Instinct MI450 GPUs is set to begin in the second half of 2026.”

     [ Related: More AMD news and insights ]

    That likely means anywhere from 3.5 million to 5 million chips, according to Moor Insights & Strategy. “AMD is now able to seed the market with a lot of its GPUs,” said Matt Kimball, a Moor VP and principal analyst. 

    Nvidia supply limits a big factor

    Under other circumstances, that OpenAI endorsement might not mean much, but enterprise IT executives are finding it increasingly difficult to purchase GPUs from Nvidia, so this gives them a critically needed second source for those chips.

    An AMD spokesperson, Drew Symonds of AMD corporate communications, told Network World in an email that “OpenAI is purchasing the GPUs” but couldn’t specify the amount or whether there was a direct payment in cash. 

    [ Related: What are GPUs? The processing power behind AI ]

    “Best I can refer you to about revenue expectations is a quote in our press release from Jean Hu, CFO, AMD. ‘Our partnership with OpenAI is expected to deliver tens of billions of dollars in revenue for AMD while accelerating OpenAI’s AI infrastructure buildout,’” Symonds wrote. 

    But that doesn’t specify that the dollars referenced would come from OpenAI. Others have interpreted the remark as referring to potential increased revenue from companies buying from AMD because of the OpenAI endorsement.

    Rodolfo Rosini, CEO of Vaire Computing, said the supply problems with Nvidia are absolutely a critical background factor for the AMD-OpenAI deal.

    “There is unbound demand for Nvidia hardware, but a limited supply, and upstream there is a limited supply of wafers from TSMC to Nvidia,” Rosini said. “So now the demand is overspilling into competing offerings, as AI companies can’t stand still while they wait for an allocation.”

    AMD and OpenAI are deepening the hardware and software collaboration that began with the launch of the MI300X in December 2023, they said in a joint statement, which quoted OpenAI President Greg Brockman as saying, “Building the future of AI requires deep collaboration across every layer of the stack.”

    Analysts suggested that collaboration could take the form of OpenAI making improvements to, or even guiding development of, ROCm, a software stack for AMD GPUs that competes with CUDA, Nvidia’s equivalent for its processors.

    Rosini also saw some product weaknesses at AMD playing an outsized role.

    “AMD’s software stack is bad, but that is a bigger issue for training than for inference. They were always viable for enterprise use. They just could not command premium pricing like Nvidia does, and developers preferred [Nvidia’s] CUDA,” Rosini said. “[OpenAI] directing AMD’s software roadmap instead of the management of AMD will be great. Labs like OpenAI know exactly what they want and will be very vocal about it.”

    Chip supplier diversity needed

    Another analyst, Jack Gold, principal analyst for J. Gold Associates, agreed.

    “This is an indication that OpenAI recognizes a need to diversify its processor suppliers, as it continues to expand its data centers. The most advanced Nvidia GPUs are on allocation, with buildouts outpacing supplies,” Gold said. “By solidifying AMD chip supplies through this commitment and investment, OpenAI can continue its massive build out campaign.”

    Gold said that he also expects this to fuel more AMD purchases from enterprises. 

    “It’s highly likely that other major AI players will follow suit and deploy AMD-powered datacenters, even more so than with the current movement, with AMD GPUs seen as the secondary supplier,” Gold said. 

    However, Gold disagreed with Rosini’s poor assessment of AMD software. “AMD software is not all that terrible,” he said, noting that the problem is the popularity of CUDA. He said, “you can’t just take CUDA and put it on an AMD chip,” and that means that OpenAI will have to write “some level of abstraction.” He added, “if I am writing that level of abstraction, do I really care what the underlying chip is?”

    Gold estimated that the chips being delivered are roughly worth $180 billion, assuming that the deal will likely need about 6 million chips to reach the six gigawatts mentioned in the news release and he sees those chips typically selling for about $30,000 each.

    “These guys [enterprise IT executives] are going to look at it seriously, especially if they can’t get Nvidia chips,” Gold said. “An endorsement by OpenAI is worth a lot.”

    Moor’s Kimball’s estimate of the number of chips needed for this deal is lower, suggesting it will be “anywhere from 3.5 million to 5 million GPUs.”

    He said that this deal might help AMD dig itself out of the AI hole it has found itself in for years. 

    “AMD has been struggling to capture market share relative to Nvidia, despite a very good architecture. Hardware-wise it is superior,” said Kimball who disagrees with Rosini’s assessment of AMD’s software. The fact that it is not compatible with CUDA is a weakness for AMD due to CUDA’s popularity among enterprise IT leaders, he said: “It’s been an issue forever. ROCm is its Achilles heel. It is not used and it is not compatible with CUDA.”

    “AMD needs to seed the market and it needs to get some adoption out there, and OpenAI is a great vehicle for that,” Kimball said.

    OpenAI has the leverage today

    What is odd about the agreement is that nothing in the copious amount of detail published — the SEC 8K filing alone has multiple attachments — indicates that OpenAI will be paying any money for these chips, or at least not a specific agreed amount.

    Abhishek Singh, a partner at the Everest Group, sees this deal as not being about money. And Singh also sees it as a very smart move for both OpenAI and AMD.

    “It is asymmetric, isn’t it? And why wouldn’t it be? OpenAI has the leverage right now. Every chipmaker wants to be part of its supply chain because OpenAI effectively defines the reference workload for AI,” Singh said. “For AMD, this isn’t just about selling chips. It’s about proximity. Getting in early means access to OpenAI’s models, data patterns, and performance feedback that will shape AMD’s roadmap for years. That’s worth more than immediate cash flow.”

    Singh added that, in this instance, the revenue wouldn’t be nearly as attractive as the long-term potential benefits.

    “OpenAI doesn’t need to part with cash to get that value. The warrant structure is clever: it gives them a financial upside if AMD executes well, and no exposure if it doesn’t. So the money isn’t flowing, because this isn’t a cash-for-silicon deal,” Singh said. “It’s a trade of influence for opportunity. AMD is buying relevance in AI compute, and OpenAI is buying flexibility and optionality for its next growth phase.”

    Singh also addressed one of the biggest quiet truths in the GenAI space, which is that OpenAI is publicly committing to spending a lot of money that it doesn’t appear to have. The company is reported to have current annualized revenue of only $8.6 billion and it has already lost $7.8 billion in the first half of 2025.

    “It’s fair to say that questions about OpenAI’s cash capacity have become a recurring theme. The warrant structure gives OpenAI optionality rather than obligation. If its cash position and priorities allow, it can exercise those rights. If not, there’s no exposure or liability. It’s a smart construct for both sides,” Singh said. “AMD secures strategic alignment with one of the world’s largest AI compute consumers, while OpenAI gains upside participation without committing cash today.”

    Even though OpenAI is receiving the chips, the only potential cashflow goes to OpenAI. OpenAI was given the right to purchase AMD shares, trading at more than $200 on Monday night, for one penny a share. There is a variety of restrictions based on the performance of both companies, but that agreement has the potential to deliver a lot of cash to OpenAI.

    Most observers said that AMD was clearly entering into the negotiations with OpenAI as the weaker party. Cashflow notwithstanding, OpenAI has massive momentum within AI, and AMD has relatively little.

    This story has been corrected to reflect that analyst remarks about the nature of AMD’s software collaboration with OpenAI are speculation.

    🛸 Recommended Intelligence Resource

    As UAP researchers and tech enthusiasts, we’re always seeking tools and resources to enhance our investigations and stay ahead of emerging technologies. Check out this resource that fellow researchers have found valuable.

    → HomeFi

  • Unheard FBI Audio Reveals Art Bell Discussing Threats, Rumors, and Radio Rivalries

    Unheard FBI Audio Reveals Art Bell Discussing Threats, Rumors, and Radio Rivalries

    The FBI’s file on late-night radio host Art Bell has expanded with the release of a previously unheard audio recording, offering a rare, albeit brief and segmented glimpse into the Bureau’s investigation of his harassment complaints and the evidence he once submitted himself. According to the FBI’s written records, Bell had provided an audio tape to agents as part of his report, a piece of evidence now partially released. The recording is difficult to listen to: fragmented, punctuated by long silences, and seemingly edited or redacted, leaving major gaps in conversation and context.

    The Black Vault first obtained and published Bell’s written FBI records in 2023. Those documents, covering investigations between 1998 and 2000, show that Bell contacted the Bureau after receiving messages and communications he considered threatening. The records include interviews with Bell, his associates, and several individuals named in the complaints.

    In one report, agents wrote that Bell “was interviewed at his request concerning threats against his life.” He told investigators that he was host of a syndicated talk show that “airs to approximately 420 stations” and that the program “deals with fact and speculation concerning the paranormal, extra-terrestrials, unidentified flying objects and advanced technology aircraft of the United States.”

    Art Bell

    The file details how Bell began receiving “messages over the Internet from Filipino individuals and groups alleging that Bell had issued derogatory messages against Filipinos.” Bell denied the claims and told the FBI that “the bogus messages were from address ‘KatLover@artbell.com (Art Bell),’” adding that he had reason to believe they originated from a known server. The Bureau confirmed that Bell “maintains a genuine concern for his personal safety.”

    While the written records contain heavy redactions and more than 150+ pages of fully withheld material, they reveal a series of federal inquiries across multiple field offices, documenting both online defamation and what Bell described as targeted harassment.

    The newly released 16-minute audio file, made public in October 2025, captures Bell in conversation with an unidentified individual about the same period of turmoil. His voice conveys frustration and disbelief as he reacts to what had been said about him. At one point, Bell responds directly to an accusation:

    “That is completely false.”

    Later, he alludes to his professional disputes in radio, saying:

    “Talk Radio Network split away when I was purchased by Premier Radio Networks, and Talk Radio Network decided they were gonna compete with me.”

    Throughout the recording, long gaps and muted portions suggest significant redactions or removed audio, consistent with other law enforcement FOIA releases.

    Together, the 2023 and 2025 releases offer a documented glimpse into the final years of the FBI’s correspondence with Bell; a period marked by unsubstantiated threats, online impersonation, and personal anxiety for one of broadcasting’s most distinctive voices.

    ###

    Document Archive

    Art Bell FBI File Release – 2023 – [77 Pages, 4MB]

    Audio Archive

    Art Bell FBI Audio Recording – Released 2025

    Download Raw Audio File, as released by the FBI – [MP3 Files, 23MB] (Transcript below)

    Audio Transcript

    The following audio transcript was created by The Black Vault. Some errors may occur, during the process:

    [00:00:00.000 – 00:00:08.840] Hi, is it?
    [00:00:10.380 – 00:00:10.820] Hi.
    [00:00:16.500 – 00:00:17.820] Well, I’m happy to meet you.
    [00:00:27.820 – 00:00:28.340] Okay.
    [00:00:28.340 – 00:00:29.840] Okay, um…
    [00:00:29.840 – 00:00:36.340] I guess, you know, I’ve heard some rumors, you know, about, um…
    [00:00:37.060 – 00:00:39.600] Maybe, I don’t know, somebody who was in…
    [00:00:39.600 – 00:00:39.980] Life.
    [00:00:40.940 – 00:00:42.160] And that’s about all I know.
    [00:00:42.640 – 00:00:45.340] I’ve had, you know, I’ve had my own disagreements, uh, with…
    [00:00:46.000 – 00:00:48.520] Um, nothing life-shattering.
    [00:00:49.520 – 00:00:50.080] Uh…
    [00:00:50.080 – 00:00:57.340] Okay, okay, then, you know, and, um…
    [00:00:58.340 – 00:01:08.700] So I just kept my mouth shut and didn’t say anything.
    [00:01:12.620 – 00:01:13.060] So…
    [00:01:13.060 – 00:01:13.460] Okay.
    [00:01:28.340 – 00:01:32.340] Okay.
    [00:01:45.120 – 00:01:46.340] Well, you were…
    [00:01:46.340 – 00:01:48.060] Is that correct?
    [00:01:53.940 – 00:01:54.500] Okay.
    [00:01:55.500 – 00:01:56.020] Um…
    [00:01:58.340 – 00:02:08.039] Well, don’t be afraid.
    [00:02:08.440 – 00:02:10.100] What you say with me stops here.
    [00:02:19.640 – 00:02:20.660] I didn’t even know that.
    [00:02:28.340 – 00:02:32.900] Just because he was angry with me?
    [00:02:36.660 – 00:02:37.380] It’s not?
    [00:02:37.380 – 00:02:37.440] No.
    [00:02:50.540 – 00:02:52.520] Right?
    [00:02:52.520 – 00:02:52.600] Right?
    [00:02:52.600 – 00:02:52.620] Right?
    [00:02:52.620 – 00:02:52.660] Right?
    [00:02:52.660 – 00:02:52.720] Right?
    [00:02:52.720 – 00:02:58.320] Right?
    [00:02:58.340 – 00:02:58.920] Mm-hmm. [INAUDIBLE]
    [00:02:58.920 – 00:02:58.960] Right? [INAUDIBLE]
    [00:02:58.960 – 00:02:59.020] He was angry with me. [INAUDIBLE]
    [00:02:59.020 – 00:02:59.480] Uh… [INAUDIBLE]
    [00:02:59.480 – 00:02:59.560] Okay. [INAUDIBLE]
    [00:02:59.560 – 00:02:59.580] Okay. [INAUDIBLE]
    [00:03:06.580 – 00:03:11.160] Well, I don’t understand why you would have this much anger at me or what the, you know, [INAUDIBLE]
    [00:03:11.240 – 00:03:14.080] something that even goes beyond anger because I kind of… [INAUDIBLE]
    [00:03:15.820 – 00:03:17.620] Yeah? [INAUDIBLE]
    [00:03:23.020 – 00:03:24.420] Well… [INAUDIBLE]
    [00:03:24.420 – 00:03:24.460] Yeah. [INAUDIBLE]
    [00:03:24.460 – 00:03:24.540] Yeah. [INAUDIBLE]
    [00:03:24.540 – 00:03:24.660] I… [INAUDIBLE]
    [00:03:24.660 – 00:03:25.100] I… [INAUDIBLE]
    [00:03:25.100 – 00:03:25.420] I… [INAUDIBLE]
    [00:03:25.420 – 00:03:26.320] I… [INAUDIBLE]
    [00:03:26.320 – 00:03:26.600] I… [INAUDIBLE]
    [00:03:26.600 – 00:03:26.740] I… [INAUDIBLE]
    [00:03:26.740 – 00:03:27.000] I… [INAUDIBLE]
    [00:03:27.280 – 00:03:27.540] I… [INAUDIBLE]
    [00:03:27.540 – 00:03:28.260] I… [INAUDIBLE]
    [00:03:28.340 – 00:03:31.180] fire away
    [00:03:38.960 – 00:03:42.220] well are you afraid of
    [00:03:44.940 – 00:03:50.080] okay I guess the obvious question is why I mean
    [00:03:51.900 – 00:03:54.820] okay
    [00:03:58.340 – 00:04:00.400] you
    [00:04:05.220 – 00:04:07.280] you
    [00:04:21.459 – 00:04:23.520] you
    [00:04:28.340 – 00:04:31.620] got sure [INAUDIBLE]
    [00:04:34.560 – 00:04:36.620] you [INAUDIBLE]
    [00:04:41.560 – 00:04:44.560] really [INAUDIBLE]
    [00:04:46.260 – 00:04:48.340] you [INAUDIBLE]
    [00:04:58.340 – 00:05:00.180] My God.
    [00:05:28.340 – 00:05:49.760] That is a…
    [00:05:49.760 – 00:05:53.480] Completely false.
    [00:05:58.340 – 00:06:00.460] What?
    [00:06:13.460 – 00:06:15.120] He won’t have me killed.
    [00:06:20.380 – 00:06:23.160] I didn’t…
    [00:06:28.340 – 00:06:33.880] And he…
    [00:06:33.880 – 00:06:36.880] When they were so unhappy with it,
    [00:06:37.380 – 00:06:38.120] that they…
    [00:06:38.120 – 00:06:39.620] They made that decision.
    [00:06:39.740 – 00:06:40.720] I have nothing to do with that.
    [00:06:41.260 – 00:06:43.320] They disliked me so much,
    [00:06:43.660 – 00:06:44.360] that they…
    [00:06:44.360 – 00:06:45.320] I didn’t.
    [00:06:45.400 – 00:06:46.340] I had nothing to do with it.
    [00:06:48.280 – 00:06:49.480] Not a thing.
    [00:06:53.580 – 00:06:54.340] Not a thing.
    [00:06:58.340 – 00:07:05.740] As in actually burning…
    [00:07:05.740 – 00:07:07.540] Burning my house or…
    [00:07:07.540 – 00:07:10.620] Destroying my career.
    [00:07:11.260 – 00:07:11.760] I see.
    [00:07:13.480 – 00:07:15.880] Well, gee, there was this little thing
    [00:07:15.880 – 00:07:18.460] that got out on the Internet about Filipinos.
    [00:07:19.100 – 00:07:20.320] I wonder if he’s behind that.
    [00:07:21.860 – 00:07:23.240] Now that I think about it,
    [00:07:23.260 – 00:07:24.800] it came from a server.
    [00:07:24.800 – 00:07:24.880] A server.
    [00:07:28.340 – 00:07:29.740] That much we found out for sure.
    [00:07:58.340 – 00:08:16.360] I don’t know.
    [00:08:28.340 – 00:08:32.840] How long were you…
    [00:08:32.840 – 00:08:50.840] There was a rumor…
    [00:08:58.340 – 00:09:01.700] He wanted her killed.
    [00:09:02.580 – 00:09:03.740] The story was that…
    [00:09:05.740 – 00:09:07.580] That’s what was going around.
    [00:09:22.380 – 00:09:23.720] Yeah, that’s what I had heard.
    [00:09:28.340 – 00:09:32.400] And probably up until fairly recently.
    [00:09:32.400 – 00:09:33.400] Otherwise…
    [00:10:02.400 – 00:10:15.180] Or were these, like, you know,
    [00:10:15.300 – 00:10:17.040] diary notes that you were making or something?
    [00:10:23.820 – 00:10:24.900] Holy smokes.
    [00:10:24.900 – 00:10:28.900] Yes.
    [00:10:32.400 – 00:10:42.460] Look, I knew he had some emotional problems
    [00:10:42.460 – 00:10:43.900] because he went through this…
    [00:10:49.500 – 00:10:51.900] Then he was accused of…
    [00:10:53.560 – 00:10:57.280] And there was an investigation I know about all of that.
    [00:10:58.560 – 00:11:00.900] That was back in the days…
    [00:11:00.900 – 00:11:02.380] And I thought that was kind of weird then.
    [00:11:02.380 – 00:11:02.520] I don’t understand.
    [00:11:05.080 – 00:11:05.640] Whew.
    [00:11:15.140 – 00:11:16.500] Well, look, be safe.
    [00:11:17.520 – 00:11:18.620] I said be safe.
    [00:11:18.620 – 00:11:18.660] Be safe.
    [00:11:32.380 – 00:11:40.840] We [INAUDIBLE]
    [00:11:40.840 – 00:11:41.880] seldom talk to either. [INAUDIBLE]
    [00:11:41.880 – 00:11:42.660] Example! [INAUDIBLE]
    [00:11:42.680 – 00:11:43.540] I met a girl who was a lawyer. [INAUDIBLE]
    [00:11:43.540 – 00:11:46.500] Which doesn’t blend in… [INAUDIBLE]
    [00:11:46.500 – 00:11:47.260] That wasn’t really a lawyer! [INAUDIBLE]
    [00:11:47.260 – 00:11:48.100] I think she was… [INAUDIBLE]
    [00:11:48.100 – 00:11:49.560] Well, what about what she did? [INAUDIBLE]
    [00:11:49.760 – 00:11:51.940] She was trouble making. [INAUDIBLE]
    [00:11:51.960 – 00:11:52.160] Sometimes when it’d… [INAUDIBLE]
    [00:11:52.160 – 00:11:53.780] Well, did he actually do anything to her? [INAUDIBLE]
    [00:11:54.000 – 00:11:56.160] I was not interested in any initial differences with her. [INAUDIBLE]
    [00:11:56.240 – 00:11:57.240] But I mean… [INAUDIBLE]
    [00:11:57.300 – 00:11:58.960] I’d like to see her stop [INAUDIBLE]
    [00:11:58.960 – 00:11:59.620] before she finished her first job. [INAUDIBLE]
    [00:11:59.620 – 00:11:59.780] I… [INAUDIBLE]
    [00:11:59.780 – 00:11:59.880] I, um… [INAUDIBLE]
    [00:11:59.880 – 00:12:00.140] Maybe I had an issue with her work. [INAUDIBLE]
    [00:12:00.140 – 00:12:00.480] And that would… [INAUDIBLE]
    [00:12:00.480 – 00:12:00.960] But anyway, [INAUDIBLE]
    [00:12:00.960 – 00:12:01.100] Well… [INAUDIBLE]
    [00:12:01.100 – 00:12:01.120] I just got a run on you. [INAUDIBLE]
    [00:12:01.120 – 00:12:01.180] I’m going to tell you, [INAUDIBLE]
    [00:12:01.180 – 00:12:01.240] Erica. [INAUDIBLE]
    [00:12:01.240 – 00:12:01.540] What does it do? [INAUDIBLE]
    [00:12:01.540 – 00:12:10.960] Yeah, actually, that’s all it was.
    [00:12:10.960 – 00:12:15.240] And actually, I was just upset with him.
    [00:12:31.540 – 00:12:39.320] Yeah, I know.
    [00:12:39.320 – 00:12:40.320] What the hell are you doing?
    [00:12:40.320 – 00:12:49.380] I thought we just agreed.
    [00:12:49.380 – 00:12:50.380] There you are.
    [00:12:50.380 – 00:12:53.440] So then I sort of, for a while, I didn’t call him.
    [00:12:53.440 – 00:12:54.480] I didn’t talk to him.
    [00:12:54.480 – 00:12:59.140] I never said a bad word, because I don’t do that.
    [00:12:59.140 – 00:13:01.380] And then…
    [00:13:01.540 – 00:13:06.780] Yeah, he started…
    [00:13:06.780 – 00:13:19.500] Yeah, and so obviously when he starts…
    [00:13:19.500 – 00:13:23.820] Thinking it’s better just to keep my mouth shut.
    [00:13:23.820 – 00:13:31.040] And so obviously when he’s…
    [00:13:31.040 – 00:13:31.520] And I called him.
    [00:13:31.520 – 00:13:42.280] I called him a couple of times and I said, what are you doing?
    [00:13:42.280 – 00:13:43.280] Or something like that.
    [00:13:43.280 – 00:13:47.080] And it would get out.
    [00:13:47.080 – 00:13:48.080] And I…
    [00:13:48.080 – 00:13:49.080] Yeah.
    [00:13:49.080 – 00:13:54.400] Yeah, you’ve got it.
    [00:14:01.520 – 00:14:14.980] And I got a call from the radio.
    [00:14:14.980 – 00:14:20.280] And as far as that was concerned, that was my…
    [00:14:20.280 – 00:14:22.820] Talk radio networks split away when I was purchased by Premier Radio Networks.
    [00:14:22.820 – 00:14:25.480] And Talk Radio Network decided they were gonna compete with me.
    [00:14:25.480 – 00:14:26.480] Talk Radio Network.
    [00:14:26.480 – 00:14:27.480] And in doing so, they…
    [00:14:27.480 – 00:14:28.480] And…
    [00:14:28.480 – 00:14:29.480] And I…
    [00:14:29.480 – 00:14:30.440] I…
    [00:14:30.440 – 00:14:31.400] Yeah.
    [00:14:31.400 – 00:14:33.860] I had advised you.
    [00:14:51.220 – 00:14:52.060] Not by me.
    [00:14:52.160 – 00:14:53.940] I didn’t have a damn thing.
    [00:15:01.400 – 00:15:03.300] Well, that’s simply untrue.
    [00:15:04.140 – 00:15:05.160] That’s simply untrue.
    [00:15:05.340 – 00:15:07.560] But I guess…
    [00:15:07.560 – 00:15:09.160] So…
    [00:15:09.160 – 00:15:10.500] So be it.
    [00:15:15.300 – 00:15:16.860] Look, I appreciate
    [00:15:16.860 – 00:15:18.960] communicating with you, and
    [00:15:18.960 – 00:15:21.660] I sure do
    [00:15:21.660 – 00:15:22.480] hope you’re okay.
    [00:15:27.480 – 00:15:28.240] Um…
    [00:15:31.400 – 00:15:52.700] Probably you should take the
    [00:15:52.700 – 00:15:53.780] that you have.
    [00:15:59.940 – 00:16:01.240] I appreciate the call.
    [00:16:01.400 – 00:16:02.740] I wouldn’t ask for one.

     

     

    The post Unheard FBI Audio Reveals Art Bell Discussing Threats, Rumors, and Radio Rivalries first appeared on The Black Vault.

    🛸 Recommended Intelligence Resource

    As UAP researchers and tech enthusiasts, we’re always seeking tools and resources to enhance our investigations and stay ahead of emerging technologies. Check out this resource that fellow researchers have found valuable.

    → EconomyBookings

  • Nvidia and Fujitsu team for vertical industry AI projects

    Nvidia and Fujitsu team for vertical industry AI projects

    Nvidia has partnered with Japanese technology giant Fujitsu to work together on vertical industry-specific artificial intelligence projects.

    The collaboration will focus on co-developing and delivering an AI agent platform tailored for industry-specific agents in sectors such as healthcare, manufacturing, and robotics. Through Fujitsu is initially targeting industries in Japan, the company intends to expand globally.

    [ RelatedMore Nvidia news and insights ]

    The two firms also plan to collaborate on integrating the Fujitsu-Monaka CPU family and Nvidia GPUs via Nvidia NVLink Fusion. The combined AI agent platform and computing Is intended to build agents that continuously learn and improve. This will enable cross-industry, self-evolving, full-stack AI infrastructure, overcoming the limitations of general-purpose computing systems.

    Fujitsu said it aims to create a human-AI co-creation cycle and continuous system evolution by integrating high-speed AI computing with human judgment and creativity. It specifically plans to accelerate manufacturing using digital twins and leverage physical AI like robotics for operational automation designed to address labor shortages and stimulate human innovation.

    In addition, Fujitsu said it intends to co-develop a self-evolving AI agent platform with Nvidia for industries that balances high speed and strong security through multi-tenancy support, built on Fujitsu Kozuchi, a cloud-based AI platform and integrating Fujitsu’s AI workload orchestrator technology with the Nvidia’s Dynamo platform.

    The self-evolving AI agents and AI models will be done through using Nvidia’s NeMo and enhancing Fujitsu’s multi-AI agent technologies, including optimization of Fujitsu’s Takane AI model. Deployment of these AI agents will be done as Nvidia NIM microservices.

    🛸 Recommended Intelligence Resource

    As UAP researchers and tech enthusiasts, we’re always seeking tools and resources to enhance our investigations and stay ahead of emerging technologies. Check out this resource that fellow researchers have found valuable.

    → roboform

  • Video Friday: Drone Easily Lands on Speeding Vehicle

    Video Friday: Drone Easily Lands on Speeding Vehicle

    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    World Robot Summit: 10–12 October 2025, OSAKA, JAPANIROS 2025: 19–25 October 2025, HANGZHOU, CHINA

    Enjoy today’s videos!

    We demonstrate a new landing system that lets drones safely land on moving vehicles at speeds up to 110 kilometers per hour. By combining lightweight shock absorbers with reverse thrust, our approach drastically expands the landing envelope, making it far more robust to wind, timing, and vehicle motion. This breakthrough opens the door to reliable high-speed drone landings in real-world conditions.

    Createk Design Lab ]

    Thanks, Alexis!

    This video presents an academic parody inspired by KAIST’s humanoid robot moonwalk. While KAIST demonstrated the iconic move with robot legs, we humorously reproduced it using the Tesollo DG-5F robot hand. A playful experiment to show that not only humanoid robots but also robotic fingers can “dance.”

    Hangyang University ]

    Twenty years ago, Universal Robots built the first collaborative robot. You turned it into something bigger. Our cobot was never just technology. In your hands, it became something more: a teammate, a problem-solver, a spark for change. From factories to labs, from classrooms to warehouses. That’s the story of the past 20 years. That’s what we celebrate today.

    Universal Robots ]

    The assistive robot Maya, newly developed at DLR, is designed to enable people with severe physical disabilities to lead more independent lives. The new robotic arm is built for seamless wheelchair integration, with optimized kinematics for stowing, ground-level access, and compatibility with standing functions.

    DLR ]

    Contoro and HARCO Lab have launched an open-source initiative, ROS-MCP-Server, which connects AI models (for example, Claude, GPT, Gemini) with robots using a robot operating system and the Model Context Protocol. This software enables AI to communicate with multiple ROS nodes in the language of robots. We believe it will allow robots to perform tasks previously impossible due to limited intelligence, help robotics engineers program robots more efficiently, and enable nonexperts to interact with robots without deep robotics knowledge.

    GitHub ]

    Thanks, Mok!

    Here’s a quick look at the Conference on Robotic Learning (CoRL) exhibit hall, thanks to PNDbotics.

    PNDbotics ]

    Old and busted: sim to real. New hotness: real to sim!

    Paper ]

    Any humanoid video with tennis balls should be obligated to show said humanoid failing to walk over them.

    LimX ]

    Thanks, Jinyan!

    The correct answer to the question “Can you beat a robot arm at tic-tac-toe?” should be “No. No, you cannot.” And you can’t beat a human, either, if they know what they’re doing.

    AgileX ]

    It was an honor to host the team from Microsoft AI as part of their larger educational collaboration with the University of Texas at Austin. During their time here, they shared this wonderful video of our lab facilities.

    The University of Texas at Austin HCRL ]

    Robots aren’t just sci-fi anymore. They’re evolving fast. AI is teaching them how to adapt, learn, and even respond to open-ended questions with advanced intelligence. Aaron Saunders, chief technology officer of Boston Dynamics, explains how this leap is transforming everything, from simple controls to full-motion capabilities. While there are some challenges related to safety and reliability, AI is significantly helping robots become valuable partners at home and on the job.

    IBM ]

    🛸 Recommended Intelligence Resource

    As UAP researchers and tech enthusiasts, we’re always seeking tools and resources to enhance our investigations and stay ahead of emerging technologies. Check out this resource that fellow researchers have found valuable.

    → Ecovacs

  • “Looked Like Iron Man”: Tucson Pilot’s “Drone” Report and Audio Recording Revealed in FAA Records

    “Looked Like Iron Man”: Tucson Pilot’s “Drone” Report and Audio Recording Revealed in FAA Records

    FOIA Release Letter

    On December 17, 2022, a Cessna 172 pilot approaching Tucson, Arizona, reported an unusual airborne object to air traffic controllers. Now, following a Freedom of Information Act (FOIA) request filed by The  Black Vault, the FAA has released official documents and audio transcripts detailing the encounter.

    The FOIA case, filed January 19, 2023, was prompted by a comment on Reddit in response to a Black Vault posting about pilot sightings. A user referenced a recording of air traffic control communications and mentioned a pilot describing a strange red and silver object. That tip led directly to the FOIA request, which the FAA confirmed in a February 28, 2023 disclosure letter responding to “records pertaining to the Red and Silver Ironman Unmanned Aircraft Systems on December 17, 2022, near Tucson, Arizona”.

    The Encounter

    The official FAA Mandatory Occurrence Report (MOR) states that Cessna N21272 “reported a red and silver drone at 80 at the TUS091006 moving east bound. N21272 advised drone looked like Iron Man. Possibly a balloon. No other sightings of drone”.

    A Quality Assurance review further noted that “while descending through 8,400 feet, N21272 reported passing a silver and red drone that was off of their left side and slightly below them. No evasive action was reported”.

    Air Traffic Control Audio

    The released air traffic control audio provides a clearer picture of what the pilot described in real time. At 12:06 p.m. local time, the pilot transmitted:

    “There was something strange that just flew by off the left side. It looks like some type of drone, but it was like red and silver. I couldn’t really tell the altitude, just a little bit below me”.

    Controllers later followed up to clarify the report:

    “And the drone, you said at 8,000 feet?”

    The pilot responded:

    “It was a little bit below me, I was at 8,000, and it wasn’t like a normal looking drone. It looked more vertical than like the quadcopter type and it was silver and red”.

    When asked again to describe the object, the pilot elaborated:

    “Yeah, it was silver and red. It almost reminded me of, like, an Iron Man suit, although not exactly like that, but like a silvery red color. It was pretty weird”.

    ###

    Document Archive

    FOIA Case 2023-03232 Release Package [5 Pages, 0.5MB]

    Loading…

    Taking too long?

    Reload document
    |

    Open in new tab

    Download [650.98 KB]

     

    The post “Looked Like Iron Man”: Tucson Pilot’s “Drone” Report and Audio Recording Revealed in FAA Records first appeared on The Black Vault.

    🛸 Recommended Intelligence Resource

    As UAP researchers and tech enthusiasts, we’re always seeking tools and resources to enhance our investigations and stay ahead of emerging technologies. Check out this resource that fellow researchers have found valuable.

    → Ecovacs