SkyWatchMesh – UAP Intelligence Network

UAP Intelligence Network – Real-time monitoring of official UAP reports from government agencies and scientific institutions worldwide


  • I test wireless headphones all year and these are my top picks on sale during Amazon’s Big Deal Days


    Nobody likes a crying baby at 35,000 feet, and some people will spend anything just to block them out. Luckily, this year’s Prime Big Deal Days—running Oct. 7-8—include real deals on noise-cancelling headphones that won’t empty your entire travel fund. We’re talking lowest-ever prices on top picks, especially when it comes to ANC performance to shut the world up long enough for you to focus, nap, or just not lose your mind. These are some of the best picks from our travel headphones for airplanes guide, plus more. And right now, they’re even better values if you’re a Prime member.

    Remember, if you don’t have an active Amazon Prime subscription, you can sign up for a trial at this link.

    Apple AirPods Max USB-C — $429 (22% off, was $549)


    When they came out, the Apple AirPods Max weren’t just headphones—they were a flex. They broke the Bluetooth active noise-cancelling headphones price ceiling wide open. They’re no longer at the price pinnacle, but that doesn’t mean they’re cheap, so this discount will be appreciated just in time to upgrade your fall travel soundtrack. So there’s still a bit of flex here, but more than that, they’re flex-ible. With immersive, adaptive spatial audio, luxurious materials, and ANC that hushes jet engines and officemates alike, these are the headphones that make every playlist sound editorial. The USB-C charging port unlocks lossless wired listening, and firmware updates keep making them smarter. Yes, they’re still hefty. Yes, the case is avant-garde. But for Apple users chasing hi-fi harmony that hands off effortlessly from iPhone to MacBook, the AirPods Max still hit all the right notes. But act fast, because this kind of discount ends faster than your favorite playlist!

    Nothing Headphone (1) — $254 (15% off, was $299)


    The Nothing Headphone (1) is all about making a statement—visually and sonically. With its signature semi-transparent design and chunky yet clean UFO-cool curves, this over-ear stunner is impossible to ignore. It looks like it crash-landed from the future. But what’s even more noticeable once you put them on is a punchy, open soundstage (tuned for clarity and conviction by our friends at KEF) and ANC that hushes the chaos without muting the magic. Designed by ex-OnePlus co-founder Carl Pei, it blends futuristic flair with functional fidelity, making it ideal for commuters who care as much about style as they do about stereo separation. Touch controls glide, the battery is built for binging, and every detail—from voice clarity to spatial depth—feels tuned for Gen Z audiophiles with a taste for disruption.

    Baseus Inspire XH1 — $109.99 (27% off, was $149.99)


    The Baseus Inspire XH1 is what happens when under-the-radar design meets overachiever aspirations. Don’t let the chill matte finish fool you—inside, it’s Bose-informed brilliance all the way down. Think plush ANC that erases engine hums and office chaos, paired with warm, textured audio that gives your playlist its own zip code. The XH1 is light on weight and heavy on presence, built for all-day wear without fatigue. Game mode drops latency, dual-device pairing keeps you in flow, and the price? Quietly disruptive. This isn’t just budget luxury—it’s hi-fi with hustle, tuned for folks who want their audio to slap, not shout.

    Soundcore by Anker Space One — $69.99 (30% off, was $99.99)


    The Space One headphones from Soundcore by Anker are down to an all-time low. And they come with hybrid ANC that actually works. We’re not saying it’s Bose-level, but it’s shockingly close for less than 70 bucks. These headphones cut up to 98% of ambient noise, offer 40 hours of hi-res wireless playback with ANC on, and include LDAC support and customizable EQ via the app. There’s also wear detection, multipoint Bluetooth, and a collapsible frame. Flagship tricks, just at budget settings. If you’re commuting and want some peace, this is a solid seatmate.

    Beats Studio Pro – Wireless Bluetooth Noise Cancelling Headphones — $169.95 (50% off, was $349.99)


    If you want all-day bass at half off, the Beats Studio Pro headphones are down to 50% off the $349 list. That’s a big drop for headphones that bring the boom and still play nice with both iPhones and Androids. You get solid ANC, 40-hour battery life, and USB-C wired lossless playback if you want to go hi-res (with detailed sound exhibiting 80% less distortion than standard Beats). Personalized Spatial Audio with head tracking adds a layer of fun without turning everything into a gimmick. No, they’re not studio monitors. But they’re comfortable, thumpy, and surprisingly flexible, offering crystal-clear calls. And they come with two years of AppleCare+ … Whether you’re commuting or ignoring a coworker, these get the job done—just don’t wait if you want the best price.

    Sennheiser Momentum 4 Wireless Headphones in White — $229.95 (49% off, was $449.95)

    These headphones feel and sound as good as they look.


    If you’re willing to spend more for a significantly refined sound signature, Sennheiser’s Momentum 4 Wireless headphones are $229.95 in white—their best price since before Christmas. It’s just one colorway, but that’s a real discount on Sennheiser’s top-tier travel cans. These aren’t just good-looking. They’re backed by the velvety house sound and obsessive quality control we saw firsthand at Sennheiser’s audiophile factory in Tullamore, Ireland—you can read about that here. The Momentum 4s carry that lineage with plush comfort, deep ANC, and a warm, articulate response that holds up across long hauls.

    If that’s still too steep, the ACCENTUM Plus is $149.95 (was $249.95), or go leaner with the $99.95 ACCENTUM—both solid budget entries with Sennheiser DNA.

    Sony WH-1000XM4 Wireless Premium Noise Canceling Overhead Headphones — $188 (46% off, was $348)


    The Sony WH-1000XM4 headphones are far from the newest, but they’re still some of the most beloved ANC cans out there. Because they still hold up with big, punchy sound, 30-hour battery life, and some smart noise cancellation. Wear detection? Multipoint? Custom EQ? Yep, it’s all in there. Plus, they’re light, comfy, and actually make airplane hum bearable. We reviewed them here and know plenty of people who still reach for them. If you want travel-ready headphones, this is the time to think ahead and skip the airport kiosk earbud regret.

    And, thanks to the recently released WH-1000XM6, you can also enjoy a discount on last-generation WH-1000XM5 headphones, currently $298 (was $399).

    Sonos Ace headphones — $298 (34% off, was $449)


    If you’ve made it this far … here’s another top-tier choice with a deep discount. Want to experience immersive spatial audio, whether you’re on the go or the household has gone to bed? Like Sonos soundbars, the Sonos Ace supports the latest Dolby Atmos surround sound tracks through Apple Music and the Sonos app. Whether you’re on Spotify or Netflix, they hit all the right notes—from Addison Rae to K-Pop Demon Hunters to an iconic motif marking its 50th anniversary (dun dun, duuunnn dunnn). Available in white or black, these headphones also offer flagship-level Adaptive ANC and physical comfort, 30 hours of battery life, and support for lossless listening via USB-C or wirelessly if you have an Arc Ultra soundbar. Thanks to a recent firmware update, you can even use a pair of Ace headphones with one Arc Ultra for a private Jaws viewing party for two.

    Bowers & Wilkins Px8 — $479 (31% off, was $699)


    Perfect for on-the-go audiophiles, the Bowers & Wilkins flagship Px8 wireless headphones feature bespoke 40mm carbon cones—derived from the B&W 700 Series loudspeaker domes (and now trickled down to the Pi8 earbuds)—that are coupled with an optimized basket/motor system. This tilts the sound signature from body blows to landing right on the button. Separation and control are heightened and tightened, tempering unruly transients that can come across as excitement but threaten to trip up accuracy. Angled to attain a uniform alignment between every point of the ear and driver surface, these light-yet-rigid carbon cones are intended for low-distortion (THD+N <0.1%), high-engagement listening. Most impressive is that this precision-engineered, spacious audio is available wherever and whenever you need it, using Bluetooth for a solid connection with support for SBC, AAC, and aptX Adaptive (with aptX Lossless) for maximum iOS/Android compatibility. There may be a newer version now, but all that means is that these already fabulous headphones are now available at an even more fabulous price.

    Bose QuietComfort Ultra Headphones (Deep Plum) — $285 (34% off, was $429)

    The violet colorway is cool without going over the top.


    The Bose QuietComfort Ultra Headphones offer more than just audio output—they’re a full-body exhale. With world-class ANC that hushes the chaos and spatial audio that lifts the mix around your skull like a halo, these headphones deliver immersive sound with hi-fi finesse. The fit is plush without pressure, the build sleek without screaming, and the 24-hour battery life? Chef’s kiss. You can fly cross-country without reaching for a charger. Along the way, multipoint pairing means your laptop and phone don’t have to fight for custody, and Snapdragon Sound support keeps everything crisp over Bluetooth. Whether it’s jazz at midnight or lo-fi at sunrise, these headphones make every moment feel like a well-mastered mood. And, at 34 percent off, this is one of the steepest discounts Bose offers all year on headphones that come in multiple colors to match “The White Album” or “Kind of Blue,” etc.

    TwelveSouth AirFly Pro 2 Bluetooth Adapter — $49.99 (29% off, was $69.99)


    Need a way to get the best out of your headphones when you’re traveling? This diminutive dongle is your top travel accessory. It connects to the 3.5mm port on airplane seats, treadmills, and more, to transmit the signal wirelessly and conveniently via Bluetooth 5.3 (with aptX HD). Connect up to two sets of headphones, so you can watch the same awful movie as a family member or friend. You can barely fit your body in that economy row, so loose cables are just asking for trouble. And when you arrive at your destination, it can act as a receiver so you can play your favorite tracks and podcasts in a rental car. This is the deluxe edition, so it also comes with a two-prong International Airline Adapter and a premium travel bag.

    The post I test wireless headphones all year and these are my top picks on sale during Amazon’s Big Deal Days appeared first on Popular Science.


  • 3D and AI: Excellent Fits for the Fashion Industry


    When you’re buying a new item of clothing, you probably don’t give much thought to the design and assembly processes the garment went through before arriving at the store.

    Creating a piece of apparel starts with a designer sketching out an idea. Then a pattern is made, the fabric is chosen and cut, and the garment is sewn. Finally, the clothing is packaged and shipped.

    To expedite the process, some apparel companies now use 3D technologies including design software, body scans, visualization, and 3D printers. The tools allow designers to envision their creations in a variety of colors, fabrics, and motifs. Avatars known as digital twins are created to simulate how the clothes will look and fit on different body types. Body scans generate measurements for better-fitting clothing and improved product design.

    Some manufacturers incorporate artificial intelligence to streamline operations, and additional companies likely will explore it as it becomes more accurate.

    Not all garment makers are utilizing 3D technologies to their fullest potential, however.

    To advance 3D technology for designers, manufacturers, and retailers, the 3D Retail Coalition holds an annual challenge that spotlights academic institutions and startups that are leading the way. The contest is cosponsored by the IEEE Standards Association Industry Connections 3D Body Processing program, which works with the clothing industry to create standards for technology that uses 3D scans to create digital models.

    The winners of this year’s contest were selected in June at the PI Apparel Fashion Tech Show, held in New York City.

    The Fashion Institute of Technology (FIT) placed first in the academic category. The New York City school offers programs in design, fashion, art, communications, and business.

    Pixascale won the startup category. Based in Herzogenaurach, Germany, the consultancy assists fashion and consumer goods companies with automating content, managing 3D digital assets, and improving workflows.

    Custom-made clothing by 3D and AI

    Ill-fitting garments, shoes, and accessories are problems for clothing companies. The average return rate worldwide for clothing ordered online is more than 25 percent, according to PrimeAI.

    To make ready-to-wear clothing, designers use grading, a process that takes an initial sample pattern of a base size using established standards and 3D body scans, then makes smaller and larger versions to be mass-produced. But the resulting clothes do not fit everyone.

    Returns, which can be frustrating for shoppers, are costly for clothing companies due to reshipping and restocking expenses.

    Some customers can’t be bothered to send back unwanted items, and they throw them in the garbage, where they end up in landfills.

    “What if we could go back to the days when you would go to a shop, get measured, and someone would custom-make your garment?” posits Leigh LaVange, an assistant professor of technical design and patternmaking at FIT.

    That was the idea behind LaVange’s winning project, Automated Custom Sizing. Her proposal uses 3D technology and AI to produce custom-tailored clothing on demand for all body types. She outlined short- and long-term scalable solutions in her submission.

    “I want to fix our fit problem, but I also realize we can’t do that as an industry without changing the manufacturing process.” —Leigh LaVange

    “I see it [custom sizing] as a solution that can be automated and eventually rolled out across all different types of brands,” she says.

    The short-term proposal involves measuring a person’s base body specifications, such as bust, waist, thighs, biceps, and hips—either manually or from a 3D body scan. An avatar of the customer is then created and entered into a database preloaded with 3D representations of various sizes of the sample garment. The AI program notes the customer’s specs and the existing sizes to determine the best fit. If, for example, the person’s chest matches the medium-size dimensions but the hips are a few millimeters larger, the program still might recommend medium because it determined the material around the hips had enough excess fabric. A rendering of an avatar wearing an item is shown to customers to help them decide whether to make the purchase.

    LaVange says her solution will help improve customer satisfaction and minimize returns.

    Her long-term plan is a truly customized fit. Using 3D body scans, an AI program would determine the necessary adjustments to the pattern based on the customer’s specifications and critical fit points, like the waist, while preserving the original design. The 3D system then would make alterations, which would be rendered on the customer’s avatar for approval. The solution would eliminate excess inventory, LaVange says, because the clothing would be custom-made.

    Because her proposals rely on technologies not currently used by the industry and a different way of interacting with customers, a shift in production would be required, she says.

    “Most manufacturing systems today are set up to produce as many units as possible in a single day,” she says. “I believe there’s a way to produce garments efficiently if you set up your manufacturing facility correctly. I want to fix our fit problem, but I also realize we can’t do that as an industry without changing the manufacturing process.”

    A digital asset management platform

    The winning submission in the startup category, AI-First DAM [digital asset management] as an Intelligent Backbone for Agile Product Development, uses 3D technology and AI to combine components of clothing design into a centralized platform.

    Kristian Sons, chief executive of Pixascale, launched the startup in February. He left Adidas in January after nine years at the company, where he was the technical lead for digital creation.

    Many apparel companies, Sons says, still store their 3D files on employees’ local drives or on Microsoft’s SharePoint, a Web-based document-management system.

    Those methods make things difficult because not everyone has access.

    Sons’ cloud-based platform addresses the issue by sharing digital assets, such as images, videos, 3D models, base styles, and documents, with all parties involved in the process.

    That includes designers, seamstresses, and manufacturers. His system integrates with the client’s file management system, providing access to the most recent images, renderings, and other relevant data.

    His DAM system also includes a library of embellishments such as zippers and buttons, as well as fabric options.
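    The centralized-asset idea can be illustrated with a minimal record type that tracks every revision of a file, so all parties resolve to the same latest version. The field names and storage URIs below are assumptions for illustration, not Pixascale’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One entry in a digital asset management (DAM) catalog."""
    asset_id: str
    kind: str                                     # e.g. "image", "3d-model", "fabric"
    versions: list = field(default_factory=list)  # revision URIs, newest appended last

    def add_version(self, uri: str) -> None:
        self.versions.append(uri)

    def latest(self) -> str:
        return self.versions[-1]

# Everyone who looks up the jacket model gets the same, newest revision.
jacket = Asset("style-0042", "3d-model")
jacket.add_version("s3://dam/style-0042/v1.glb")
jacket.add_version("s3://dam/style-0042/v2.glb")
print(jacket.latest())  # → s3://dam/style-0042/v2.glb
```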

    “Getting this information into a platform that everyone can easily access and can understand what others did really builds a foundation for collaboration.” —Kristian Sons

    “Getting this information into a platform that everyone can easily access and track what others did really builds a foundation for collaboration,” he says.

    Sons also is working on incorporating AI agents and large language models to connect with internal systems and application programming interfaces to autonomously conduct simple research requests.

    That might include suggesting new products or different silhouettes, or modifying the previous season’s offerings with new colors, Sons says.

    “These AI agents certainly will not be perfect, but they are a good starting point so designers don’t have to start from scratch,” he says. “I think using AI agents is super exciting because in the past few years in the fashion industry, we have been talking about how AI would do the creative parts, like designing a product. But now we’re talking about the AI doing the low-level tasks.”

    A demonstration of how Pixascale’s DAM works is on YouTube.


  • IBM touts agentic AI orchestration, cryptographic risk controls


    At its TechXchange 2025 conference this week, IBM took the wraps off a number of software packages that have been updated to help enterprises more efficiently operate their growing AI infrastructures.

    Enterprise software additions include tools for IBM’s watsonX Orchestrate platform, which helps customers build, deploy, and manage AI agents and workflows to automate business operations. In addition, IBM introduced agentic AI capabilities for the mainframe with a new release of watsonx Assistant for Z. IBM also announced a centralized encryption key management tool called Guardium Cryptography Manager to help customers secure AI data and more. 

    IBM watsonx Orchestrate

    IBM watsonx Orchestrate offers more than 500 tools and customizable, domain-specific agents from IBM and third-party contributors. Among the additions to watsonx Orchestrate are AgentOps capabilities that offer real-time monitoring and policy-based controls for observability and governance to ensure agents behave reliably and securely.

    Other new features for Orchestrate include:

    • Agentic workflows that let developers build standardized, reusable AI flows that ensure multiple agents and tools work together. 
    • Support for the low-code, open-source Langflow software developers’ package that lets customers build agents and flows with a drag-and-drop visual builder.
    • An agent governance and observability feature that lets customers develop and oversee their enterprise agent environment, monitoring usage, success rates, and latencies from a unified dashboard, IBM stated. Orchestrate will also support production monitoring of AI agents to ensure that predefined guardrails and policies are enforced automatically, preventing issues like prompt injection attacks and unauthorized data access.
    • New pre-built agents for finance and supply chain as well as additional customer service agents. 
    • An expansion of the IBM watsonx Orchestrate portfolio that adds Groq support. Groq offers AI-specific ASICs designed to run AI inference workloads rapidly.

    “AI agents are designed to act autonomously. But when accuracy, compliance and repeatability are critical, autonomy needs structure. That’s where agentic workflows in watsonx Orchestrate come in. And with the integration of Langflow, users can design, visualize and manage complex flows [and develop] workflows that are predictable, auditable and reusable—giving organizations confidence that critical processes will run correctly every time,” wrote Suzanne Livingston, vice president, product management with IBM watsonx Orchestrate, in a blog post.

    IBM watsonx Assistant for Z 

    For the mainframe community, IBM announced watsonx Assistant for Z version 3, which includes prebuilt agents designed to help mainframe customers solve problems, automate repetitive tasks, and resolve issues more quickly, according to Tina Tarquinio, vice president, product management for IBM Z and LinuxONE. The agents are powered by IBM Z-specific Retrieval-Augmented Generation (ZRAG) with curated content that accelerates onboarding and knowledge transfer, Tarquinio said.

    “These agents can do complex workflows. They can handle more than one task at a time, we can stack the tasks,” Tarquinio said. “In the critical environments that mainframes run in, context, awareness and memory are key. This means that when we are talking to the agents and working through the assistance, they can really understand the train of questions that we’ve been asking, and they can continue to provide answers quickly. For customers who are using these agents to build up new skills and to drive productivity, this is absolutely key,” Tarquinio said.

    IBM Spyre Accelerator

    In other mainframe news, Big Blue announced the general availability of its new Spyre Accelerator – purpose-built hardware designed to bring generative AI capabilities on-premises for IBM z17 and IBM LinuxONE 5. The 32-core Spyre runs on a PCIe card, and additional cards can be added depending on requirements. The Spyre is an enterprise-grade accelerator designed for AI inferencing tasks with high efficiency and scalability, particularly for complex models and generative AI.

    The IBM z17’s Telum II processor, integrated AI accelerator, and Spyre accelerator are designed to work in unison to support real-time, high-speed AI inferencing and model execution directly on the platform, minimize latency, and eliminate the need to move sensitive data. The idea is to embed AI into mission-critical workloads and support new high-performance applications such as advanced fraud detection, supply chain optimization, and automated decision-making with high performance and security, according to IBM.

    IBM Guardium Cryptography Manager

    IBM also enhanced its Guardium Cryptography Manager software to help customers secure sensitive enterprise data. 

    “Data security is reaching a critical inflection point. Sensitive information is sprawling across hybrid environments, expanding the attack surface and complicating encryption and governance,” wrote Vishal Kamat, vice president of data security for IBM, in a blog post. “At the same time, it is anticipated that quantum computing will introduce significant risks: encryption algorithms – relied upon by the majority of organizations – will be broken, exposing today’s encrypted data.”

    A report from the IBM Institute for Business Value reveals a troubling gap: Only 30% of organizations have completed a cryptographic inventory, leaving most uninformed about both current vulnerabilities and emerging quantum risks, Kamat added. That’s where Cryptography Manager can help.
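    The “cryptographic inventory” Kamat describes can be pictured as a catalog of where each algorithm is used, with quantum-vulnerable public-key cryptography flagged for migration. The asset names and the simple policy below are illustrative assumptions, not Guardium’s discovery logic.

```python
# Public-key schemes breakable by Shor's algorithm on a large quantum computer.
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "DH", "ECDH"}

# A toy cryptographic inventory: which asset uses which algorithm, and where.
inventory = [
    {"asset": "payments-db", "algorithm": "AES-256",  "use": "data at rest"},
    {"asset": "api-gateway", "algorithm": "RSA-2048", "use": "TLS key exchange"},
    {"asset": "audit-log",   "algorithm": "ECDSA",    "use": "code signing"},
]

def quantum_risks(entries):
    """Return inventory entries whose algorithm family is quantum-vulnerable."""
    return [e for e in entries
            if e["algorithm"].split("-")[0] in QUANTUM_VULNERABLE]

for e in quantum_risks(inventory):
    print(f"{e['asset']}: {e['algorithm']} ({e['use']}) needs a PQC migration plan")
```

    Note that the symmetric AES-256 entry is not flagged: symmetric ciphers are weakened but not broken by known quantum attacks, which is why inventories focus first on public-key usage.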

    IBM Guardium Cryptography Manager 2.0 introduces native remediation capabilities, expanded discovery integrations, and enhanced risk assessment features. These additions empower organizations to move beyond visibility and policy enforcement to active remediation of cryptographic risks, Kamat stated. New features include:

    • Enterprise risk score: Helps provide a cumulative view of the organization’s cryptographic risk posture across the enterprise, enabling prioritization and executive-level visibility.
    • Extended discovery integration brings UI-driven API integrations with vulnerability scanners such as Nessus and Qualys, improving asset coverage and posture visibility.
    • Enhanced policy management adds ready-to-use policies aligned with asset management and cryptographic hygiene, to help improve policy enforcement and audit readiness.
    • New policy-driven automation to trigger remediation workflows for high-risk violations.
    • Automated detection of encryption gaps that initiates remediation requests to protect sensitive data by using each database’s native encryption capabilities.
    • Supports native encryption and key lifecycle management for Oracle, IBM Db2, MySQL, MongoDB, PostgreSQL, IBM Informix, IBM DataStax and Scylla.


  • Network digital twin technology faces headwinds


    What if there were a way to reduce by as much as 70% the incidence of network outages caused by poorly executed software upgrades or the faulty installation of new hardware? What if there were a way to validate the current state of network configurations and track configuration drift to avoid network downtime, performance degradation or security breaches linked to misconfigurations of firewalls and other mission critical network components?

    By applying digital twin technology, network teams can reap the benefits of modeling complex networks in software rather than what many enterprises do today – spend millions of dollars on a shadow IT testing environment or not test at all.

    Digital twin technology is most commonly used today in manufacturing environments, and while it has immense promise in enterprise network environments, there are hurdles that need to be overcome before it becomes mainstream.

    Digital twin: What it is and what it isn’t

    The way Fabrizio Maccioni describes it, digital twin is analogous to Google Maps.

    First, there’s a basic mapping of the network. And just like Google Maps is able to overlay information, such as driving directions, traffic alerts or locations of gas stations or restaurants, digital twin technology enables network teams to overlay information, such as a software upgrade, a change to firewall rules, new versions of network operating systems, vendor or tool consolidation, or network changes triggered by mergers and acquisitions.

    Network teams can then run the model, evaluate different approaches, make adjustments, and conduct validation and assurance to make sure any rollout accomplishes its goals and doesn’t cause any problems, explains Maccioni, senior director of product marketing for digital twin vendor Forward Networks.

    However, digital twin technology is not real time. “We don’t change anything. We’re read only. We don’t change the configuration of network devices,” Maccioni says. (Forward Networks does provide integrations with workflow automation vendor ServiceNow and with the open-source automation engine Ansible.)

    Gartner analyst Tim Zimmerman adds: “These tools typically operate on near-real time or snapshot-based data, which supports validation and documentation but limits their usefulness for real-time troubleshooting or active incident response. This distinction is important. While digital twins can improve planning and reduce cost associated with change, they are not currently positioned as operational tools for live network management.”

    “As a result, adoption has been largely limited to large, complex environments that can justify the investment in additional management software,” Zimmerman says.

    What are the benefits of digital twin in networking?

    “Configuration errors are a major cause of network incidents resulting in downtime,” says Zimmerman. “Enterprise networks, as part of a modern change management process, should use digital twin tools to model and test network functionality, business rules, and policies. This approach will ensure that network capabilities won’t fall short in the age of vendor-driven agile development and updates to operating systems, firmware or functionality.”

    Gartner estimates that organizations using network digital twins to model configuration and software/firmware updates can reduce unplanned outages by 70%.

    Zimmerman adds that 15% of security breaches are caused by cloud misconfigurations or reconfigurations associated with common use cases like migrating an on-prem app to the cloud. He adds that digital twin tools can ensure that network policies don’t conflict with or prevent data flows as applications are migrated to the public cloud. Other use cases cited by Zimmerman include:

    • Capacity planning to model future traffic growth and infrastructure requirements.
    • Incident replay to reconstruct past outages or breaches to analyze root causes.
    • Security posture validation to simulate attack scenarios and test network segmentation and firewall policies.
    • Simulating boundary conditions that might differ from expected outcomes.

    The top driver for enterprise customers is risk mitigation, says Scott Wheeler, cloud practice lead at Asperitas Consulting, which provides an as-a-service option for network digital twins. “It’s a place to test things out to make sure the project doesn’t mess everything up.” For example, one enterprise client with a large global network used digital twin technology to model the consolidation of four routing protocols into one. “That implementation went off without a hitch,” says Wheeler.

    Another valuable use case is testing failover scenarios, says Wheeler. Network engineers can design a topology that has alternative traffic paths in case a network component fails, but there’s really no way to stress test the architecture under real world conditions. He says that in one digital twin customer engagement “they found failure scenarios that they never knew existed.”
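Even a toy model conveys how such failure scenarios surface: represent the topology as a graph, fail components, and re-check reachability. A hedged sketch (the topology and names are made up; a real digital twin models full device state, not just links):

```python
from collections import deque

def reachable(links, src, dst, failed=frozenset()):
    """BFS reachability over an undirected link list, skipping failed links."""
    adj = {}
    for a, b in links:
        if (a, b) in failed or (b, a) in failed:
            continue
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical topology: two paths from a branch site to the data center.
links = [("branch", "core1"), ("branch", "core2"),
         ("core1", "dc"), ("core2", "dc")]

print(reachable(links, "branch", "dc", failed={("branch", "core1")}))  # True
print(reachable(links, "branch", "dc",
                failed={("branch", "core1"), ("core2", "dc")}))  # False
```

Sweeping every pair of simultaneous failures is exactly the kind of exhaustive stress test that is impractical on a live network but cheap in a model.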

    Maccioni adds that there are a variety of use cases that are attracting enterprise interest. Some customers start with firewall rules administration, a task that a large enterprise might spend millions of dollars a year on. Once an organization recognizes the benefits of automating firewall rule management, they might branch out into other areas, such as outage prevention, troubleshooting, and compliance.

    “We’re also starting to see security use cases be a driver,” Maccioni says. Digital twin technology can help organizations create a single source of truth that helps eliminate friction between security operations and network operations teams when it comes to troubleshooting.

    What are the barriers to widespread adoption?

    One of the major barriers is that network digital twin is not offered by the major infrastructure vendors or network management vendors as part of their core functionality. That may change, but for now, if you want to deploy digital twin you need to engage with a third-party provider. “This is a whole new project, a whole separate environment. It’s a good-sized effort,” Wheeler explains.

    And there doesn’t seem to be a standard way to accomplish digital twin. For example, Forward Networks uses a proprietary data collection method called Header Space Analysis, which was developed while the founders of the company were at Stanford University. It enables the creation of a virtual copy of a network using configuration data and operational state information. 

    Forward Networks enables customers to perform queries against the model. And it overlays other types of data, such as network performance monitoring, in order to facilitate troubleshooting. The snapshot process (collecting and processing the data) can take several hours in a large enterprise network and might be conducted, for example, a couple of times a day. So, the model is current, but not real-time.
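To give a flavor of the underlying idea, Header Space Analysis treats packet headers as strings over {0, 1, *} and propagates them through forwarding rules. A drastically simplified sketch (toy 8-bit headers and invented rules; the real technique operates on entire header spaces symbolically):

```python
def matches(header, pattern):
    """A header bit string matches a pattern where '*' is a wildcard."""
    return all(p in ("*", h) for h, p in zip(header, pattern))

def forward(header, rules):
    """Return the out-port of the first matching rule, or None (drop)."""
    for pattern, port in rules:
        if matches(header, pattern):
            return port
    return None

rules = [("1010****", "port1"), ("11******", "port2")]
print(forward("10100111", rules))  # port1
print(forward("11000000", rules))  # port2
print(forward("00000000", rules))  # None -> dropped
```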

    Asperitas uses an open-source framework called EVE-NG (emulated virtual environment – next generation) to reverse engineer the network. Wheeler explains that if enterprise network engineers wanted to create a digital twin using EVE-NG, they would have to take on the coding work required to build the virtual network and would also need to constantly update it to reflect changes to the network.

    Wheeler adds that deploying digital twin requires a significant effort, both in terms of complexity and cost. And it is typically limited to modeling the impact of a change involving a single component from a single vendor. Or to a specific part of the network, such as a campus, says Zimmerman.

    Even within a campus environment, Zimmerman has identified three levels of digital twins: The first level is network configuration and parameter/policy validation; the second level is single vendor equipment replacement or upgrade; and the third level is multiple vendor migration or vendor replacement.

    The future of digital twins in networking

    Gartner points out that “enterprise IT leaders continue to face a combination of challenges: increasing network complexity, heightened cybersecurity risk, and a shortage of skilled personnel. In this context, enterprise network digital twins are emerging as a tool to support network resilience and operations planning.”

    But that won’t happen overnight. Gartner expects that in the next 3-5 years, digital twins will be used to model parts of campus networks, and within the next 10 years they will expand to the entire network.

    Maccioni says network digital twin technology adoption had been somewhat slow because the technology represented a new concept for network engineers. “It is now resonating more with customers” as awareness grows and as enterprises begin to allocate budget for digital twin, he adds.

    Wheeler agrees that there are headwinds, including the fact that “you don’t have a lot of push from large network vendors to do it.” But he adds, “If some of those barriers are knocked down, I think you’ll see accelerated adoption.”

    Zimmerman adds that, “for broader adoption, we feel that the ability to model composite networks of individual components (whether it is a single vendor network or ultimately, a network with multiple vendor components) is needed to move the market ahead.”

    However, there’s a huge difference between deploying digital twin in a factory and in a global enterprise network. A manufacturing facility is a controlled environment with a discrete number of devices and a fixed, linear production process. A global network can have tens of thousands of endpoints and is dynamic: end users are mobile, data paths change in real time, and so on.

    The ultimate vision, says Zimmerman, is a digital twin that “gives enterprise IT leaders the ability to test day-to-day operational workflows on their existing end-to-end network, simulating any operating system or configuration changes in real time and testing boundary conditions that today must be manually configured.”

    But, he adds, “this may require the processing power of quantum computing and the storage capacity of the cloud.”

    🛸 Recommended Intelligence Resource

    As UAP researchers and tech enthusiasts, we’re always seeking tools and resources to enhance our investigations and stay ahead of emerging technologies. Check out this resource that fellow researchers have found valuable.

    → Surfshark

  • Noncontact Motion Sensor Brings Precision to Manufacturing

    Aeva Technologies, a developer of lidar systems based in Mountain View, Calif., has unveiled the Aeva Eve 1V, a high-precision, noncontact motion sensor built on its frequency modulated continuous wave (FMCW) sensing technology. The company says that the Eve 1V measures an object’s motion with accuracy, repeatability, and reliability—all without ever making contact with the material. That last point is key for the Eve 1V’s intended environment: Industrial manufacturing.

    Today’s manufacturing lines are under pressure to deliver faster production, tighter tolerances, and zero defects, often while working with a wide variety of delicate materials. Traditional tactile tools such as measuring wheels and encoders can slip, wear out, and cause costly downtime. Many noncontact alternatives, while promising, are either too expensive or fall short in accuracy and reliability under real-world conditions, says Mina Rezk, cofounder and chief technology officer at Aeva.

    “Eve 1V was built to solve that exact gap: A compact, eye-safe, noncontact motion sensor that delivers submillimeter-per-second velocity accuracy without touching the material, so manufacturers can eliminate slippage errors, avoid material damage, and reduce maintenance-related downtime, enabling higher yield and more predictable operations,” Rezk says.

    Unlike traditional lidar, which sends bursts of light and waits for those bursts to return to make measurements, FMCW continuously emits a low-power laser while sweeping its frequency. By comparing outgoing and returning signals, it detects frequency shifts that reveal both distance and velocity in real time. Measuring an object’s velocity in addition to its position in three-dimensional space makes FMCW a type of 4D lidar.
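The up- and down-chirp arithmetic behind that claim can be written out directly. With a triangular chirp of bandwidth B and period T, the range term of the beat frequency is f_r = 2RB/(cT) and the Doppler term is f_d = 2v/λ; the two ramps yield f_r ∓ f_d, so measuring both recovers range and velocity. A sketch with made-up parameters (not Aeva's actual design values):

```python
C = 3.0e8  # speed of light, m/s

def fmcw_range_velocity(f_up, f_down, bandwidth, chirp_time, wavelength):
    """Recover range and radial velocity from up/down-ramp beat frequencies."""
    f_r = (f_up + f_down) / 2   # range contribution
    f_d = (f_down - f_up) / 2   # Doppler contribution
    rng = f_r * C * chirp_time / (2 * bandwidth)
    vel = f_d * wavelength / 2
    return rng, vel

# Synthesize beats for a target at 10 m closing at 0.5 m/s, then recover them
# (assumed: 1 GHz sweep over 1 ms, 1550 nm laser).
B, T, lam = 1e9, 1e-3, 1550e-9
f_r = 2 * 10.0 * B / (C * T)
f_d = 2 * 0.5 / lam
print(fmcw_range_velocity(f_r - f_d, f_r + f_d, B, T, lam))  # ~(10.0, 0.5)
```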

    The Eve 1V is the second member of Aeva’s Eve 1 family, following the launch of the Eve 1D earlier this year. The Eve 1D is a compact displacement sensor capable of detecting movement at the micrometer scale, roughly 1/100 the thickness of a human hair. “Together, Eve 1D and Eve 1V show how we can take the same FMCW perception platform and tailor it for different industrial needs: Eve 1D for distance measurement and vibration detection, and Eve 1V for precise velocity and length measurement,” Rezk says.

    Future applications could extend into robotics, logistics, and consumer health, where noncontact sensing may enable the detection of microvibrations on human skin for accurate pulse and blood-pressure readings.

    FMCW Lidar for Precision Manufacturing

    The company’s core FMCW architecture, originally developed for long-range 4D lidar for automobiles, can be adjusted through software and optics for highly precise motion sensing at close range in manufacturing, according to Rezk. This flexibility means the system can track extremely slow movements, down to fractions of a millimeter per second, in a factory setting, or it can monitor faster motion over longer distances in other applications.

    By avoiding physical contact, the Eve 1V eliminates wear and tear, slippage, contamination, and the need for physical access to the part. “That delivers three practical advantages in a factory: One, maintenance-free operation with no measuring wheels to replace or recalibrate; two, material friendliness—you can measure delicate, soft, or textured surfaces without risk of damage, and three, operational robustness—no slippage errors and fewer stoppages for service,” Rezk says. Put together, that means more uptime, steady throughput, and less scrap, he adds.

    When measuring velocity, engineers often rely on one of three tools: encoders, laser velocimeters, or camera-based systems. Each has its strengths and its drawbacks. Traditional encoders are low-cost but can wear down over time. Laser-based velocity-measurement systems, while precise, tend to be large and expensive, making them difficult to implement widely. And camera-based approaches can work for certain inspection tasks, but they usually require markers, controlled lighting, and complex processing to measure speed accurately.

    Rezk says that the Eve 1V system offers a balance of these options. It provides precise and consistent velocity measurements without contacting material, making it compact, safe, and simple to install. Its outputs are comparable with existing encoder systems, and because it doesn’t rely on physical contact, it requires minimal maintenance.

    This approach helps cut down on wasted energy from slippage, eliminates the need for maintenance tied to parts that wear out, and ultimately lowers long-term operating costs—especially when compared with traditional contact-based systems or expensive laser options.

    This method avoids stitching together frame-by-frame comparisons and resists interference from sunlight, reflections, or ambient light. Built on silicon photonics, it scales from micrometer-level sensing to millimeter-level precision over longer ranges. The result is clean, repeatable data with minimal noise—outperforming legacy lidar and camera-based systems.

    Aeva is expecting to begin full production of the Eve 1V in early 2026. The Eve 1V reveal follows a recent partnership with LG Innotek, a components subsidiary of South Korea’s LG Group, under which Aeva will supply its Atlas Ultra 4D lidar for automobiles, with plans to expand the technology into consumer electronics, robotics, and industrial automation.


  • Netskope expands ZTNA with device intelligence for IoT/OT environments

    Netskope this week announced it had updated its universal zero-trust network access (ZTNA) solution to extend secure access capabilities to Internet of Things (IoT) and operational technology (OT) devices that typically cannot run traditional agent software.

    Netskope says the product updates will help organizations address the security challenges of complex hybrid enterprise environments. The Universal ZTNA solution, which comprises Netskope One Private Access and Netskope Device Intelligence, now includes context-aware device intelligence capabilities that automatically discover and classify device risk through the 5G Netskope One Gateway. The company says the updated capabilities enable organizations to implement zero-trust policies for machines and robots that can’t support agent-based security tools.
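Conceptually, policy for agentless devices becomes a function of observed attributes rather than an installed client. A hypothetical sketch of risk-tiered access (the attributes, weights, and thresholds are invented, not Netskope's actual model or API):

```python
def access_decision(device):
    """Map discovered device attributes to a zero-trust access tier."""
    risk = 0
    if device.get("firmware_outdated"):
        risk += 2
    if device.get("open_ports", 0) > 5:
        risk += 1
    if device.get("type") == "unknown":
        risk += 3   # unclassified devices carry the most risk
    if risk >= 3:
        return "quarantine"
    if risk >= 1:
        return "restricted"  # confined to its own network segment
    return "allow"

print(access_decision({"type": "plc", "firmware_outdated": True}))  # restricted
print(access_decision({"type": "unknown"}))                         # quarantine
```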

    “Legacy VPNs, NACs, and early ZTNA tools weren’t designed for the scale, speed, or diversity of today’s enterprises,” said John Martin, chief product officer of Netskope, in a statement. Universal ZTNA gives organizations a consistent way to secure users and devices, whether they are remote or on the local network, he said. “Through smarter, risk-based policies, embedded protection, and seamless performance, we’re helping organizations cut complexity, reduce risk, and turn secure access into an enabler, rather than a barrier.”

    Robert Arandjelovic, senior director of global solution strategy at Netskope, explains that enterprise organizations are adopting universal ZTNA to expand beyond conventional security service edge (SSE) and ZTNA solutions and more effectively secure users and IoT/OT devices across all technology environments. According to a Gartner report, Universal ZTNA is expected to experience widespread adoption and grow by more than 40% by 2027.

    “Universal ZTNA provides this amazing point of consolidation, and it is because tool sprawl is already very present. If we can kill two birds with one stone, and modernize and simplify at the same time, that’s a huge driver. Security teams are never not going to be under budget pressure. In the security space, there are always more things to do and money to spend on it,” Arandjelovic says.

    The enhanced solution also reflects the ongoing convergence of networking and security technologies. Device Intelligence extends remediation and access control to east-west traffic through integrations with third-party NAC vendors, he says, while the firewall capabilities of Netskope One Gateway and Netskope One SSE provide zero trust enforcement points for north-south traffic.

    Netskope is also introducing embedded Universal ZTNA threat and data protection capabilities that inspect private application traffic for remote and local users. This unified approach addresses threats before they reach the network and safeguards sensitive data across all users and devices, Arandjelovic says.

    Netskope is using AI to streamline ZTNA management through its Netskope One Copilot for Private Access feature. The policy optimization tool uses AI to automate granular policy creation for discovered applications while continuously refining and auditing configurations. This is designed to help organizations accelerate ZTNA adoption, reduce complexity, and scale zero-trust implementations with less risk, according to Netskope.

    The enhanced Universal ZTNA solution, including Netskope One Private Access and Netskope Device Intelligence, is available now. More information is available on the Netskope blog.


  • AMD: Latest news and insights

    More processor coverage on Network World:
    Intel news and insights | Nvidia news and insights

    AMD continues to make gains in processor and data center markets thanks largely to its EPYC processors, which have chipped away at Intel’s long-standing dominance.

    According to AMD’s Q1 2025 results, revenue increased 36% over the same quarter in 2024 to $7.4 billion. More specifically, its data center segment revenue jumped by 57% year-over-year to $3.7 billion, driven largely by demand for EPYC CPUs and growing sales of AMD Instinct GPUs.

    The chip company also recently unveiled and began shipping the Instinct MI350 GPUs and previewed its next-generation, AI-focused MI400 series for future AI racks, underscoring its commitment to an open software ecosystem (ROCm).

    Finally, AMD’s Client segment revenue jumped 68% to $2.3 billion, fueled by strong demand for its “Zen 5” AMD Ryzen™ processors, including its Ryzen AI 300 series, indicating a strong push for “AI PC” market share.

    This news comes despite AMD facing a projected $1.5 billion revenue hit for fiscal 2025 due to U.S. export restrictions on AI chips to China.

    Latest AMD news and analysis

    AMD/OpenAI pact means new enterprise IT options

    October 7, 2025: Monday’s announcement that OpenAI and AMD have struck a deal could mean that AMD chips may become a viable enterprise IT option. That is good news, not because of AMD quality, which is seen as suboptimal by some, but because of the limits of Nvidia chip availability.

    AMD could be Intel’s next foundry customer

    October 3, 2025: AMD might be the latest Silicon Valley giant to join the Intel bailout parade as there are rumors that AMD is in talks to become an Intel Foundry customer. It’s unknown just how much of AMD’s business would move to Intel. AMD splits its business between TSMC and GlobalFoundries.

    IBM, AMD team on quantum computing

    August 26, 2025: IBM and AMD are working to blend Big Blue’s quantum computers with the chipmaker’s CPUs, GPUs and FPGAs to build intelligent, quantum-centric, high-performance computers. They plan to demonstrate later this year how IBM quantum computers can work with AMD technologies to deploy hybrid quantum-classical workflows.

    AMD warns of new Meltdown/Spectre-like CPU bugs

    July 11, 2025: AMD has issued an alert to users of a newly discovered form of side-channel attack similar to the infamous Meltdown and Spectre exploits that dominated the news in 2018. The potential exploits affect the full range of AMD processors (desktop, mobile, and data center models), particularly 3rd and 4th generation EPYC server processors.

    DigitalOcean teams with AMD for low-cost GPU access

    June 25, 2025: Cloud infrastructure provider DigitalOcean Holdings announced a collaboration with AMD to provide DigitalOcean customers with low-cost access to AMD Instinct GPUs starting later this year.

    AMD rolls out first Ultra Ethernet-compliant NIC

    June 23, 2025: AMD will be the first to market with an Ultra Ethernet-based networking card, and Oracle will be the first cloud service provider to deploy it. The announcement came at the recent Advancing AI event, where AMD introduced its latest Instinct MI350 series GPUs and announced the MI400X, which will be delivered next year.

    AMD steps up AI competition with Instinct MI350 chips, rack-scale platform

    June 13, 2025: AMD has launched its latest accelerator chips and offered a glimpse into its AI infrastructure strategy, aiming to expand its role in the enterprise market, which Nvidia currently dominates.

    AMD launches new Ryzen Threadripper CPUs to challenge Intel’s workstation dominance

    May 21, 2025: Marking an aggressive push into the professional workstation and high-end desktop (HEDT) segments, AMD launched its latest HPC processors.

    Survey: AMD continues to take server share from Intel

    May 20, 2025: AMD continues to take market share from Intel, growing at a faster rate and closing the gap between the two companies to the narrowest it has ever been.

    AMD, Nvidia partner with Saudi startup to build multi-billion dollar AI service centers

    May 15, 2025: As part of the avalanche of business deals that came from President Trump’s Middle East tour, both AMD and Nvidia have struck multi-billion dollar deals with an emerging Saudi AI firm.

    AMD targets hosting providers with affordable EPYC 4005 processors

    May 14, 2025: AMD launched its latest set of data center processors, targeting hosted IT service providers. The EPYC 4005 series is purpose-built with enterprise-class features and support for modern infrastructure technologies at an affordable price, the company said.

    Jio teams with AMD, Cisco and Nokia to build AI-enabled telecom platform

    March 18, 2025: Jio has teamed up with AMD, Cisco and Nokia to build an AI-enabled platform for telecom networks. The goal is to make networks smarter, more secure and more efficient to help service providers cut costs and develop new services.

    AMD patches microcode security holes after accidental early disclosure

    February 3, 2025: AMD issued two patches for severe microcode security flaws, defects that AMD said “could lead to the loss of Secure Encrypted Virtualization (SEV) protection.” The bugs were inadvertently revealed by a partner.


  • Is It All About The Resources? UFOs, Water Extraction, and Power Blackouts!

    There are many aspects to the UFO and alien question, but the notion that they are here for our raw materials, a consequence of which seemingly results in mass power blackouts, is one that is often left unaddressed. The fact is, there are many such accounts on record indicating that these seemingly otherworldly vehicles are not only using our own power reserves for their own propulsion systems (we might assume) but are also extracting water from our rivers and even water towers, often thousands of gallons at a time.


  • AMD/OpenAI pact means new enterprise IT options

    Monday’s announcement that OpenAI and AMD have struck a deal, albeit an unusual one without cash commitments, could mean that AMD chips may become a viable enterprise IT option. That is good news, not because of AMD quality, which is seen as suboptimal by some, but because of the limits of Nvidia chip availability.

    The Monday announcement simply said that the two companies would work together and that they had crafted “a 6 gigawatt agreement to power OpenAI’s next-generation AI infrastructure across multiple generations of AMD Instinct GPUs. The first one gigawatt deployment of AMD Instinct MI450 GPUs is set to begin in the second half of 2026.”

    That likely means anywhere from 3.5 million to 5 million chips, according to Moor Insights & Strategy. “AMD is now able to seed the market with a lot of its GPUs,” said Matt Kimball, a Moor VP and principal analyst. 

    Nvidia supply limits a big factor

    Under other circumstances, that OpenAI endorsement might not mean much, but enterprise IT executives are finding it increasingly difficult to purchase GPUs from Nvidia, so this gives them a critically needed second source for those chips.

    An AMD spokesperson, Drew Symonds of AMD corporate communications, told Network World in an email that “OpenAI is purchasing the GPUs” but couldn’t specify the amount or whether there was a direct payment in cash. 

    “Best I can refer you to about revenue expectations is a quote in our press release from Jean Hu, CFO, AMD. ‘Our partnership with OpenAI is expected to deliver tens of billions of dollars in revenue for AMD while accelerating OpenAI’s AI infrastructure buildout,’” Symonds wrote. 

    But that doesn’t specify that the dollars referenced would come from OpenAI. Others have interpreted the remark as referring to potential increased revenue from companies buying from AMD because of the OpenAI endorsement.

    Rodolfo Rosini, CEO of Vaire Computing, said the supply problems with Nvidia are absolutely a critical background factor for the AMD-OpenAI deal.

    “There is unbound demand for Nvidia hardware, but a limited supply, and upstream there is a limited supply of wafers from TSMC to Nvidia,” Rosini said. “So now the demand is overspilling into competing offerings, as AI companies can’t stand still while they wait for an allocation.”

    AMD and OpenAI are deepening the hardware and software collaboration that began with the launch of the MI300X in December 2023, they said in a joint statement, which quoted OpenAI President Greg Brockman as saying, “Building the future of AI requires deep collaboration across every layer of the stack.”

    Analysts suggested that collaboration could take the form of OpenAI making improvements to, or even guiding development of, ROCm, a software stack for AMD GPUs that competes with CUDA, Nvidia’s equivalent for its processors.

    Rosini also saw some product weaknesses at AMD playing an outsized role.

    “AMD’s software stack is bad, but that is a bigger issue for training than for inference. They were always viable for enterprise use. They just could not command premium pricing like Nvidia does, and developers preferred [Nvidia’s] CUDA,” Rosini said. “[OpenAI] directing AMD’s software roadmap instead of the management of AMD will be great. Labs like OpenAI know exactly what they want and will be very vocal about it.”

    Chip supplier diversity needed

    Another analyst, Jack Gold, principal analyst for J. Gold Associates, agreed.

    “This is an indication that OpenAI recognizes a need to diversify its processor suppliers, as it continues to expand its data centers. The most advanced Nvidia GPUs are on allocation, with buildouts outpacing supplies,” Gold said. “By solidifying AMD chip supplies through this commitment and investment, OpenAI can continue its massive build out campaign.”

    Gold said that he also expects this to fuel more AMD purchases from enterprises. 

    “It’s highly likely that other major AI players will follow suit and deploy AMD-powered datacenters, even more so than with the current movement, with AMD GPUs seen as the secondary supplier,” Gold said. 

    However, Gold disagreed with Rosini’s poor assessment of AMD software. “AMD software is not all that terrible,” he said, noting that the problem is the popularity of CUDA. He said, “you can’t just take CUDA and put it on an AMD chip,” and that means that OpenAI will have to write “some level of abstraction.” He added, “if I am writing that level of abstraction, do I really care what the underlying chip is?”

    Gold estimated that the chips being delivered are roughly worth $180 billion, assuming that the deal will need about 6 million chips to reach the six gigawatts mentioned in the news release and that those chips typically sell for about $30,000 each.
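The arithmetic behind that estimate is simple to reproduce: divide the announced power commitment by an assumed per-GPU draw, then multiply by an assumed unit price. A sketch (the ~1 kW per GPU and ~$30,000 per chip figures are analyst assumptions, not disclosed deal terms):

```python
def deal_estimate(total_watts, watts_per_gpu, price_per_gpu):
    """Rough chip count and dollar value implied by a power-denominated deal."""
    chips = total_watts / watts_per_gpu
    return chips, chips * price_per_gpu

# 6 GW at roughly 1 kW per GPU and ~$30,000 per chip.
chips, value = deal_estimate(6e9, 1_000, 30_000)
print(f"{chips:,.0f} GPUs, ${value / 1e9:,.0f}B")  # 6,000,000 GPUs, $180B
```

A higher assumed power draw per GPU is what pushes other estimates, like Kimball's, toward a lower chip count for the same six gigawatts.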

    “These guys [enterprise IT executives] are going to look at it seriously, especially if they can’t get Nvidia chips,” Gold said. “An endorsement by OpenAI is worth a lot.”

    Kimball’s estimate of the number of chips needed for this deal is lower: “anywhere from 3.5 million to 5 million GPUs.”

    He said that this deal might help AMD dig itself out of the AI hole it has found itself in for years. 

    “AMD has been struggling to capture market share relative to Nvidia, despite a very good architecture. Hardware-wise it is superior,” said Kimball who disagrees with Rosini’s assessment of AMD’s software. The fact that it is not compatible with CUDA is a weakness for AMD due to CUDA’s popularity among enterprise IT leaders, he said: “It’s been an issue forever. ROCm is its Achilles heel. It is not used and it is not compatible with CUDA.”

    “AMD needs to seed the market and it needs to get some adoption out there, and OpenAI is a great vehicle for that,” Kimball said.

    OpenAI has the leverage today

    What is odd about the agreement is that nothing in the copious amount of detail published — the SEC 8K filing alone has multiple attachments — indicates that OpenAI will be paying any money for these chips, or at least not a specific agreed amount.

    Abhishek Singh, a partner at the Everest Group, sees this deal as not being about money. And Singh also sees it as a very smart move for both OpenAI and AMD.

    “It is asymmetric, isn’t it? And why wouldn’t it be? OpenAI has the leverage right now. Every chipmaker wants to be part of its supply chain because OpenAI effectively defines the reference workload for AI,” Singh said. “For AMD, this isn’t just about selling chips. It’s about proximity. Getting in early means access to OpenAI’s models, data patterns, and performance feedback that will shape AMD’s roadmap for years. That’s worth more than immediate cash flow.”

    Singh added that, in this instance, the revenue wouldn’t be nearly as attractive as the long-term potential benefits.

    “OpenAI doesn’t need to part with cash to get that value. The warrant structure is clever: it gives them a financial upside if AMD executes well, and no exposure if it doesn’t. So the money isn’t flowing, because this isn’t a cash-for-silicon deal,” Singh said. “It’s a trade of influence for opportunity. AMD is buying relevance in AI compute, and OpenAI is buying flexibility and optionality for its next growth phase.”

    Singh also addressed one of the biggest quiet truths in the GenAI space, which is that OpenAI is publicly committing to spending a lot of money that it doesn’t appear to have. The company is reported to have current annualized revenue of only $8.6 billion and it has already lost $7.8 billion in the first half of 2025.

    “It’s fair to say that questions about OpenAI’s cash capacity have become a recurring theme. The warrant structure gives OpenAI optionality rather than obligation. If its cash position and priorities allow, it can exercise those rights. If not, there’s no exposure or liability. It’s a smart construct for both sides,” Singh said. “AMD secures strategic alignment with one of the world’s largest AI compute consumers, while OpenAI gains upside participation without committing cash today.”

    Even though OpenAI is receiving the chips, the only potential cashflow goes to OpenAI. OpenAI was given the right to purchase AMD shares, trading at more than $200 on Monday night, for one penny a share. There is a variety of restrictions based on the performance of both companies, but that agreement has the potential to deliver a lot of cash to OpenAI.

    Most observers said that AMD was clearly entering into the negotiations with OpenAI as the weaker party. Cashflow notwithstanding, OpenAI has massive momentum within AI, and AMD has relatively little.

    This story has been corrected to reflect that analyst remarks about the nature of AMD’s software collaboration with OpenAI are speculation.


  • Unheard FBI Audio Reveals Art Bell Discussing Threats, Rumors, and Radio Rivalries

    The FBI’s file on late-night radio host Art Bell has expanded with the release of a previously unheard audio recording, offering a rare, albeit brief and segmented glimpse into the Bureau’s investigation of his harassment complaints and the evidence he once submitted himself. According to the FBI’s written records, Bell had provided an audio tape to agents as part of his report, a piece of evidence now partially released. The recording is difficult to listen to: fragmented, punctuated by long silences, and seemingly edited or redacted, leaving major gaps in conversation and context.

    The Black Vault first obtained and published Bell’s written FBI records in 2023. Those documents, covering investigations between 1998 and 2000, show that Bell contacted the Bureau after receiving messages and communications he considered threatening. The records include interviews with Bell, his associates, and several individuals named in the complaints.

    In one report, agents wrote that Bell “was interviewed at his request concerning threats against his life.” He told investigators that he was host of a syndicated talk show that “airs to approximately 420 stations” and that the program “deals with fact and speculation concerning the paranormal, extra-terrestrials, unidentified flying objects and advanced technology aircraft of the United States.”

    Art Bell

    The file details how Bell began receiving “messages over the Internet from Filipino individuals and groups alleging that Bell had issued derogatory messages against Filipinos.” Bell denied the claims and told the FBI that “the bogus messages were from address ‘KatLover@artbell.com (Art Bell),’” adding that he had reason to believe they originated from a known server. The Bureau confirmed that Bell “maintains a genuine concern for his personal safety.”

    While the written records contain heavy redactions and more than 150 pages of fully withheld material, they reveal a series of federal inquiries across multiple field offices, documenting both online defamation and what Bell described as targeted harassment.

    The newly released 16-minute audio file, made public in October 2025, captures Bell in conversation with an unidentified individual about the same period of turmoil. His voice conveys frustration and disbelief as he reacts to what had been said about him. At one point, Bell responds directly to an accusation:

    “That is completely false.”

    Later, he alludes to his professional disputes in radio, saying:

    “Talk Radio Network split away when I was purchased by Premier Radio Networks, and Talk Radio Network decided they were gonna compete with me.”

    Throughout the recording, long gaps and muted portions suggest significant redactions or removed audio, consistent with other law enforcement FOIA releases.

    Together, the 2023 and 2025 releases offer a documented glimpse into the final years of the FBI’s correspondence with Bell, a period marked by unsubstantiated threats, online impersonation, and personal anxiety for one of broadcasting’s most distinctive voices.

    ###

    Document Archive

    Art Bell FBI File Release – 2023 – [77 Pages, 4MB]

    Audio Archive

    Art Bell FBI Audio Recording – Released 2025

    Download Raw Audio File, as released by the FBI – [MP3 Files, 23MB] (Transcript below)

    Audio Transcript

    The following audio transcript was created by The Black Vault; some errors may have occurred during the process:

    [00:00:00.000 – 00:00:08.840] Hi, is it?
    [00:00:10.380 – 00:00:10.820] Hi.
    [00:00:16.500 – 00:00:17.820] Well, I’m happy to meet you.
    [00:00:27.820 – 00:00:28.340] Okay.
    [00:00:28.340 – 00:00:29.840] Okay, um…
    [00:00:29.840 – 00:00:36.340] I guess, you know, I’ve heard some rumors, you know, about, um…
    [00:00:37.060 – 00:00:39.600] Maybe, I don’t know, somebody who was in…
    [00:00:39.600 – 00:00:39.980] Life.
    [00:00:40.940 – 00:00:42.160] And that’s about all I know.
    [00:00:42.640 – 00:00:45.340] I’ve had, you know, I’ve had my own disagreements, uh, with…
    [00:00:46.000 – 00:00:48.520] Um, nothing life-shattering.
    [00:00:49.520 – 00:00:50.080] Uh…
    [00:00:50.080 – 00:00:57.340] Okay, okay, then, you know, and, um…
    [00:00:58.340 – 00:01:08.700] So I just kept my mouth shut and didn’t say anything.
    [00:01:12.620 – 00:01:13.060] So…
    [00:01:13.060 – 00:01:13.460] Okay.
    [00:01:28.340 – 00:01:32.340] Okay.
    [00:01:45.120 – 00:01:46.340] Well, you were…
    [00:01:46.340 – 00:01:48.060] Is that correct?
    [00:01:53.940 – 00:01:54.500] Okay.
    [00:01:55.500 – 00:01:56.020] Um…
    [00:01:58.340 – 00:02:08.039] Well, don’t be afraid.
    [00:02:08.440 – 00:02:10.100] What you say with me stops here.
    [00:02:19.640 – 00:02:20.660] I didn’t even know that.
    [00:02:28.340 – 00:02:32.900] Just because he was angry with me?
    [00:02:36.660 – 00:02:37.380] It’s not?
    [00:02:37.380 – 00:02:37.440] No.
    [00:02:50.540 – 00:02:52.520] Right?
    [00:02:52.520 – 00:02:52.600] Right?
    [00:02:52.600 – 00:02:52.620] Right?
    [00:02:52.620 – 00:02:52.660] Right?
    [00:02:52.660 – 00:02:52.720] Right?
    [00:02:52.720 – 00:02:58.320] Right?
    [00:02:58.340 – 00:02:58.920] Mm-hmm. [INAUDIBLE]
    [00:02:58.920 – 00:02:58.960] Right? [INAUDIBLE]
    [00:02:58.960 – 00:02:59.020] He was angry with me. [INAUDIBLE]
    [00:02:59.020 – 00:02:59.480] Uh… [INAUDIBLE]
    [00:02:59.480 – 00:02:59.560] Okay. [INAUDIBLE]
    [00:02:59.560 – 00:02:59.580] Okay. [INAUDIBLE]
    [00:03:06.580 – 00:03:11.160] Well, I don’t understand why you would have this much anger at me or what the, you know, [INAUDIBLE]
    [00:03:11.240 – 00:03:14.080] something that even goes beyond anger because I kind of… [INAUDIBLE]
    [00:03:15.820 – 00:03:17.620] Yeah? [INAUDIBLE]
    [00:03:23.020 – 00:03:24.420] Well… [INAUDIBLE]
    [00:03:24.420 – 00:03:24.460] Yeah. [INAUDIBLE]
    [00:03:24.460 – 00:03:24.540] Yeah. [INAUDIBLE]
    [00:03:24.540 – 00:03:24.660] I… [INAUDIBLE]
    [00:03:24.660 – 00:03:25.100] I… [INAUDIBLE]
    [00:03:25.100 – 00:03:25.420] I… [INAUDIBLE]
    [00:03:25.420 – 00:03:26.320] I… [INAUDIBLE]
    [00:03:26.320 – 00:03:26.600] I… [INAUDIBLE]
    [00:03:26.600 – 00:03:26.740] I… [INAUDIBLE]
    [00:03:26.740 – 00:03:27.000] I… [INAUDIBLE]
    [00:03:27.280 – 00:03:27.540] I… [INAUDIBLE]
    [00:03:27.540 – 00:03:28.260] I… [INAUDIBLE]
    [00:03:28.340 – 00:03:31.180] fire away
    [00:03:38.960 – 00:03:42.220] well are you afraid of
    [00:03:44.940 – 00:03:50.080] okay I guess the obvious question is why I mean
    [00:03:51.900 – 00:03:54.820] okay
    [00:03:58.340 – 00:04:00.400] you
    [00:04:05.220 – 00:04:07.280] you
    [00:04:21.459 – 00:04:23.520] you
    [00:04:28.340 – 00:04:31.620] got sure [INAUDIBLE]
    [00:04:34.560 – 00:04:36.620] you [INAUDIBLE]
    [00:04:41.560 – 00:04:44.560] really [INAUDIBLE]
    [00:04:46.260 – 00:04:48.340] you [INAUDIBLE]
    [00:04:58.340 – 00:05:00.180] My God.
    [00:05:28.340 – 00:05:49.760] That is a…
    [00:05:49.760 – 00:05:53.480] Completely false.
    [00:05:58.340 – 00:06:00.460] What?
    [00:06:13.460 – 00:06:15.120] He won’t have me killed.
    [00:06:20.380 – 00:06:23.160] I didn’t…
    [00:06:28.340 – 00:06:33.880] And he…
    [00:06:33.880 – 00:06:36.880] When they were so unhappy with it,
    [00:06:37.380 – 00:06:38.120] that they…
    [00:06:38.120 – 00:06:39.620] They made that decision.
    [00:06:39.740 – 00:06:40.720] I have nothing to do with that.
    [00:06:41.260 – 00:06:43.320] They disliked me so much,
    [00:06:43.660 – 00:06:44.360] that they…
    [00:06:44.360 – 00:06:45.320] I didn’t.
    [00:06:45.400 – 00:06:46.340] I had nothing to do with it.
    [00:06:48.280 – 00:06:49.480] Not a thing.
    [00:06:53.580 – 00:06:54.340] Not a thing.
    [00:06:58.340 – 00:07:05.740] As in actually burning…
    [00:07:05.740 – 00:07:07.540] Burning my house or…
    [00:07:07.540 – 00:07:10.620] Destroying my career.
    [00:07:11.260 – 00:07:11.760] I see.
    [00:07:13.480 – 00:07:15.880] Well, gee, there was this little thing
    [00:07:15.880 – 00:07:18.460] that got out on the Internet about Filipinos.
    [00:07:19.100 – 00:07:20.320] I wonder if he’s behind that.
    [00:07:21.860 – 00:07:23.240] Now that I think about it,
    [00:07:23.260 – 00:07:24.800] it came from a server.
    [00:07:24.800 – 00:07:24.880] A server.
    [00:07:28.340 – 00:07:29.740] That much we found out for sure.
    [00:07:58.340 – 00:08:16.360] I don’t know.
    [00:08:28.340 – 00:08:32.840] How long were you…
    [00:08:32.840 – 00:08:50.840] There was a rumor…
    [00:08:58.340 – 00:09:01.700] He wanted her killed.
    [00:09:02.580 – 00:09:03.740] The story was that…
    [00:09:05.740 – 00:09:07.580] That’s what was going around.
    [00:09:22.380 – 00:09:23.720] Yeah, that’s what I had heard.
    [00:09:28.340 – 00:09:32.400] And probably up until fairly recently.
    [00:09:32.400 – 00:09:33.400] Otherwise…
    [00:10:02.400 – 00:10:15.180] Or were these, like, you know,
    [00:10:15.300 – 00:10:17.040] diary notes that you were making or something?
    [00:10:23.820 – 00:10:24.900] Holy smokes.
    [00:10:24.900 – 00:10:28.900] Yes.
    [00:10:32.400 – 00:10:42.460] Look, I knew he had some emotional problems
    [00:10:42.460 – 00:10:43.900] because he went through this…
    [00:10:49.500 – 00:10:51.900] Then he was accused of…
    [00:10:53.560 – 00:10:57.280] And there was an investigation I know about all of that.
    [00:10:58.560 – 00:11:00.900] That was back in the days…
    [00:11:00.900 – 00:11:02.380] And I thought that was kind of weird then.
    [00:11:02.380 – 00:11:02.520] I don’t understand.
    [00:11:05.080 – 00:11:05.640] Whew.
    [00:11:15.140 – 00:11:16.500] Well, look, be safe.
    [00:11:17.520 – 00:11:18.620] I said be safe.
    [00:11:18.620 – 00:11:18.660] Be safe.
    [00:11:32.380 – 00:11:40.840] We [INAUDIBLE]
    [00:11:40.840 – 00:11:41.880] seldom talk to either. [INAUDIBLE]
    [00:11:41.880 – 00:11:42.660] Example! [INAUDIBLE]
    [00:11:42.680 – 00:11:43.540] I met a girl who was a lawyer. [INAUDIBLE]
    [00:11:43.540 – 00:11:46.500] Which doesn’t blend in… [INAUDIBLE]
    [00:11:46.500 – 00:11:47.260] That wasn’t really a lawyer! [INAUDIBLE]
    [00:11:47.260 – 00:11:48.100] I think she was… [INAUDIBLE]
    [00:11:48.100 – 00:11:49.560] Well, what about what she did? [INAUDIBLE]
    [00:11:49.760 – 00:11:51.940] She was trouble making. [INAUDIBLE]
    [00:11:51.960 – 00:11:52.160] Sometimes when it’d… [INAUDIBLE]
    [00:11:52.160 – 00:11:53.780] Well, did he actually do anything to her? [INAUDIBLE]
    [00:11:54.000 – 00:11:56.160] I was not interested in any initial differences with her. [INAUDIBLE]
    [00:11:56.240 – 00:11:57.240] But I mean… [INAUDIBLE]
    [00:11:57.300 – 00:11:58.960] I’d like to see her stop [INAUDIBLE]
    [00:11:58.960 – 00:11:59.620] before she finished her first job. [INAUDIBLE]
    [00:11:59.620 – 00:11:59.780] I… [INAUDIBLE]
    [00:11:59.780 – 00:11:59.880] I, um… [INAUDIBLE]
    [00:11:59.880 – 00:12:00.140] Maybe I had an issue with her work. [INAUDIBLE]
    [00:12:00.140 – 00:12:00.480] And that would… [INAUDIBLE]
    [00:12:00.480 – 00:12:00.960] But anyway, [INAUDIBLE]
    [00:12:00.960 – 00:12:01.100] Well… [INAUDIBLE]
    [00:12:01.100 – 00:12:01.120] I just got a run on you. [INAUDIBLE]
    [00:12:01.120 – 00:12:01.180] I’m going to tell you, [INAUDIBLE]
    [00:12:01.180 – 00:12:01.240] Erica. [INAUDIBLE]
    [00:12:01.240 – 00:12:01.540] What does it do? [INAUDIBLE]
    [00:12:01.540 – 00:12:10.960] Yeah, actually, that’s all it was.
    [00:12:10.960 – 00:12:15.240] And actually, I was just upset with him.
    [00:12:31.540 – 00:12:39.320] Yeah, I know.
    [00:12:39.320 – 00:12:40.320] What the hell are you doing?
    [00:12:40.320 – 00:12:49.380] I thought we just agreed.
    [00:12:49.380 – 00:12:50.380] There you are.
    [00:12:50.380 – 00:12:53.440] So then I sort of, for a while, I didn’t call him.
    [00:12:53.440 – 00:12:54.480] I didn’t talk to him.
    [00:12:54.480 – 00:12:59.140] I never said a bad word, because I don’t do that.
    [00:12:59.140 – 00:13:01.380] And then…
    [00:13:01.540 – 00:13:06.780] Yeah, he started…
    [00:13:06.780 – 00:13:19.500] Yeah, and so obviously when he starts…
    [00:13:19.500 – 00:13:23.820] Thinking it’s better just to keep my mouth shut.
    [00:13:23.820 – 00:13:31.040] And so obviously when he’s…
    [00:13:31.040 – 00:13:31.520] And I called him.
    [00:13:31.520 – 00:13:42.280] I called him a couple of times and I said, what are you doing?
    [00:13:42.280 – 00:13:43.280] Or something like that.
    [00:13:43.280 – 00:13:47.080] And it would get out.
    [00:13:47.080 – 00:13:48.080] And I…
    [00:13:48.080 – 00:13:49.080] Yeah.
    [00:13:49.080 – 00:13:54.400] Yeah, you’ve got it.
    [00:14:01.520 – 00:14:14.980] And I got a call from the radio.
    [00:14:14.980 – 00:14:20.280] And as far as that was concerned, that was my…
    [00:14:20.280 – 00:14:22.820] Talk radio networks split away when I was purchased by Premier Radio Networks.
    [00:14:22.820 – 00:14:25.480] And Talk Radio Network decided they were gonna compete with me.
    [00:14:25.480 – 00:14:26.480] Talk Radio Network.
    [00:14:26.480 – 00:14:27.480] And in doing so, they…
    [00:14:27.480 – 00:14:28.480] And…
    [00:14:28.480 – 00:14:29.480] And I…
    [00:14:29.480 – 00:14:30.440] I…
    [00:14:30.440 – 00:14:31.400] Yeah.
    [00:14:31.400 – 00:14:33.860] I had advised you.
    [00:14:51.220 – 00:14:52.060] Not by me.
    [00:14:52.160 – 00:14:53.940] I didn’t have a damn thing.
    [00:15:01.400 – 00:15:03.300] Well, that’s simply untrue.
    [00:15:04.140 – 00:15:05.160] That’s simply untrue.
    [00:15:05.340 – 00:15:07.560] But I guess…
    [00:15:07.560 – 00:15:09.160] So…
    [00:15:09.160 – 00:15:10.500] So be it.
    [00:15:15.300 – 00:15:16.860] Look, I appreciate
    [00:15:16.860 – 00:15:18.960] communicating with you, and
    [00:15:18.960 – 00:15:21.660] I sure do
    [00:15:21.660 – 00:15:22.480] hope you’re okay.
    [00:15:27.480 – 00:15:28.240] Um…
    [00:15:31.400 – 00:15:52.700] Probably you should take the
    [00:15:52.700 – 00:15:53.780] that you have.
    [00:15:59.940 – 00:16:01.240] I appreciate the call.
    [00:16:01.400 – 00:16:02.740] I wouldn’t ask for one.


    The post Unheard FBI Audio Reveals Art Bell Discussing Threats, Rumors, and Radio Rivalries first appeared on The Black Vault.
