SkyWatchMesh – UAP Intelligence Network

UAP Intelligence Network – Real-time monitoring of official UAP reports from government agencies and scientific institutions worldwide

Tag: Technology

  • The Home Depot is blowing out power tools, appliances, and more for the lowest prices of the year during its fall sale


    You’ve almost certainly seen all of the Amazon Prime Big Deal Days coverage floating around, but Bezos isn’t the only game in town right now when it comes to serious savings. The Home Depot has some of its best deals of the year going on, with huge savings on power tools from brands like DeWalt and Milwaukee, as well as appliances, and even smart home gear. These deals only last through October 8th, so go grab what you need before you’re left paying full price.

    Editor’s Picks

    DEWALT 20V MAX XR 1/4" Impact Driver Kit (3-speed, brushless) $129 (41% off)


    This is a pro-grade driver with fast driving speeds and precise control. The kit includes two batteries (2.0Ah & 1.7Ah), a charger, and a bag—great for everything from deck screws to lag bolts.

    GE 27 cu. ft. French-Door Refrigerator (Fingerprint-Resistant Stainless) $1,498 (42% off)

    Big, family-friendly capacity with an internal water dispenser and ENERGY STAR efficiency. If you’ve been waiting to upgrade the kitchen, a 42% discount on a mainstream GE French-door is a rare steal.

    Ring Wired Doorbell Pro $129.99 (43% off)

    Flagship motion detection with head-to-toe HD video and Bird’s Eye View maps. It’s a straightforward upgrade that boosts home security and integrates cleanly with Alexa routines.

    Category deals

    Power tools & jobsite

    Outdoor power & lawn

    Smart home & security

    Major appliances

    Heating & cooling

    Vacuums & floor care

    Mini fridges & dorm


  • Zscaler, café-inspired branch networks, and mobile security


    I recently attended two stops on Zscaler’s Zenith Live APJ Tour: Melbourne, Australia and Tokyo, Japan. I travel to several US events, including Zenith Live Vegas, but I find it useful to understand technology trends in other parts of the world. The Asia Pacific and Japan region is particularly interesting because of its cultural and geographic diversity and the impact of that on technology deployments. Cloud and remote adoption in this region have outpaced some other geographic areas, for example.

    There were some interesting takeaways from the trip. Here are three of my key impressions.

    The café-like branch model is a viable option

    I recently wrote about the idea that an organization could augment or wholly replace its WAN with a café-like model. When someone works remotely, say from a café, they connect through a zero-trust security model in which the user can reach only the resources they need. Juxtapose this with a traditional VPN for remote access, where a connected user has unfettered access to everything. With the former, a breached user would have minimal impact; with the latter, it could be disastrous.

    So, if this model works, why not extend it to all users all the time?

    If a retailer has three connected devices in a store, make them all look like connected workers and ditch the traditional SD-WAN model, which requires bringing in network service, connecting it to a router and then providing connectivity to all the workers. This would greatly simplify branch networks, which would need only a Layer 2 switch and Wi-Fi instead of routers, firewalls and other equipment. At Zenith Live, Zscaler highlighted many customers, including Catholic Education Network (CEnet), MinterEllison and REA Group, that have adopted this model. This is far from a traditional WAN, but for most companies it’s simpler, lower cost and more secure.
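    To make the contrast concrete, here is a minimal Python sketch of per-device least-privilege access versus flat VPN-style access. The device names, applications, and policy table are invented for illustration; this is not Zscaler’s actual policy engine.

    ```python
    # Hypothetical illustration of "cafe-like" least-privilege access vs. a flat VPN.
    # Devices, apps, and the policy table are invented; a real zero-trust broker also
    # evaluates identity, device posture, and context before brokering each connection.

    ZT_POLICY = {
        "pos-terminal-01": {"payments-api"},               # each device sees only what it needs
        "inventory-scanner-07": {"inventory-db"},
        "manager-laptop": {"inventory-db", "payments-api", "hr-portal"},
    }

    ALL_APPS = {"payments-api", "inventory-db", "hr-portal", "erp", "file-share"}


    def zero_trust_allowed(device: str, app: str) -> bool:
        """Allow only explicitly granted (device, app) pairs; default deny."""
        return app in ZT_POLICY.get(device, set())


    def flat_vpn_allowed(device: str, app: str) -> bool:
        """A traditional VPN puts the device on the network, so everything is reachable."""
        return app in ALL_APPS


    if __name__ == "__main__":
        compromised = "pos-terminal-01"
        zt_reach = sorted(a for a in ALL_APPS if zero_trust_allowed(compromised, a))
        vpn_reach = sorted(a for a in ALL_APPS if flat_vpn_allowed(compromised, a))
        print("blast radius under zero trust:", zt_reach)   # ['payments-api']
        print("blast radius under flat VPN:  ", vpn_reach)  # every application
    ```

    Once every device is treated this way, the branch itself needs little more than Layer 2 connectivity and Wi-Fi, which is the simplification the customers above described.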

    Zscaler cellular is well aligned with the rise in physical AI

    Anyone who has seen an Nvidia keynote this year has seen CEO Jensen Huang discuss how physical AI is the next wave of AI, in which connected “things” will have AI applied to them. Sensors become smarter, devices start to move, and operational technology works seamlessly with IT.

    In Japan, I met with Nathan Howe, senior vice president of innovation and product management for Zscaler, and talked to him about the Zscaler Cellular service. Without getting into the technical nuances, the service works by integrating zero trust into the mobile network. This makes it ideally suited to secure OT devices as these endpoints typically aren’t running Windows or another OS where a security client can be loaded.

    Japan should be a leading region for Zscaler as the country is a leader in IoT deployments. Japan is arguably the global leader in the use of IoT within industrial environments. For example, the adoption rate of AI-based machinery in Japan is 63%, which is significantly higher than the 40% global average. Also, Japan’s government-backed “Society 5.0” initiative is based on the use of AI and IoT.

    Zscaler’s ability to protect connected “things” using its cellular offering is unique and will enable it to catch the rising AI-IoT tide in Japan, which will eventually make its way across the globe.

    AI everywhere drives the need for zero trust everywhere

    As at most events this year, AI was the primary focus of many of the conversations. While there are many challenges in deploying AI, the top concern remains security, and this is where a shift to zero trust everywhere can help.

    The rise in zero trust was led by VPN replacement because, as I write above, it simplifies a historically complex environment. Securing AI is not just complicated with traditional security models but also impractical from a cost perspective. AI requires data and lots of it, and this has caused companies to rethink their data management strategies. Instead of trying to pull all the company data into a central location, the preferred model is to leave it where it is – on users’ computers, at the edge, in a private cloud and public clouds – and then have the AI models access it when needed. If one were to try and secure this with firewalls, they would need to be deployed everywhere, and, in some locations, such as at an edge, it’s too expensive. Even with an unlimited budget, the operational overhead of keeping the policies up to date would be far too burdensome to make it practical.

    Zero trust everywhere applies the concept of least privilege access and minimizes the “blast radius” of a breach using software. AI has changed computing architectures, which is evolving network deployment models. These infrastructure shifts mandate that companies modernize their approach to security.

    Why organizations need to rethink security

    The overall theme of the event can be summed up by a quote from a Zscaler customer who told me, “Traditional security does not work, has never worked and isn’t ever going to work,” which is why he embraced the concept of zero trust everywhere.

    Despite companies spending, in aggregate, billions of dollars annually on cyber protection, data theft still happens. The idea isn’t to try and stop all breaches, as that leads to the concept of the castle and moat where if the perimeter is compromised, the bad guys now have access to the entire kingdom.

    Zero trust does offer protection against breaches but assumes one may happen. And, when it does, it limits the damage, since access is given only to the systems and data the user requires. With AI on the horizon, coupled with more cloud, remote work and an explosion in connected things, it’s imperative companies think about the security problem differently. The border-centric approach no longer works when the border has been diffused everywhere. Zero trust provides a simpler model.



  • How Easter Island’s famed heads ‘walked’


    Rollers, wooden carts, and even alien life are just a few of the theories of how people moved the iconic moai statues of Easter Island (also called Rapa Nui). These roughly 130,000-pound, 32-foot-high statues somehow made it about 11 miles from the volcanic quarries where they were made to their final positions, over hilly terrain, all without modern technology.

    Now, using 3D modeling and field experiments from archeologists and anthropologists, we might finally have an answer. Rope and “walking” along specially designed roads moved the giant statues, according to a study recently published in the Journal of Archaeological Science.

    “It shows that the Rapa Nui people were incredibly smart. They figured this out,” study co-author and Binghamton University anthropologist Carl Lipo said in a statement. “They’re doing it the way that’s consistent with the resources they have. So it really gives honor to those people, saying, look at what they were able to achieve, and we have a lot to learn from them in these principles.”

    Field experiments revealed that using rope and a small group of people, the people of Rapa Nui could have “walked” the moai statues. CREDIT: Carl Lipo

    What are the moai statues?

    Rapa Nui’s moai statues are massive megaliths that were built by the Rapa Nui people in roughly 1400–1650 CE. Most people know them as the Easter Island heads, but these heads do have full bodies.

    There are about 1,000 of these enormous statues. About 95 percent of them were carved out of tuff ejected from the volcano Rano Raraku. Tuff is compressed volcanic ash and is easy to carve with the stone tools available at the time, called toki.

    The moai statues were built in honor of chieftains and other important people who had died. The statues were placed on rectangular stone platforms called ahu, which serve as tombs. Initially, the moais were made with different characteristics to represent the appearance of the deceased.  

    ‘The physics makes sense’

    In this new study of roughly 1,000 moai statues, a team put the walking hypothesis to the test. The same team previously showed that an upright, rocking motion let the large statues “walk” from their volcanic quarry over to the ceremonial platforms.

    “Once you get it moving, it isn’t hard at all – people are pulling with one arm. It conserves energy, and it moves really quickly,” said Lipo. “The hard part is getting it rocking in the first place. The question is, if it’s really large, what would it take? Are the things that we saw experimentally consistent with what we would expect from a physics perspective?”

    To see how a larger statue might move, the team created high-resolution 3D models of the moai. With these models, they identified distinctive design features, including wide D-shaped bases and a forward lean. These features would have made the statues well suited to being moved in a rocking, zig-zagging motion.

    This diagram illustrates the “walking” technique whereby moai were moved along prepared roads through alternating lateral rope pulls while maintaining a forward lean of 5–15° from vertical. Image: Carl Lipo.

    They then built a 4.35-ton replica moai with the distinct forward-lean design to test out this theory. They could move the moai about 328 feet in 40 minutes, with a team of only 18 people. 

    “The physics makes sense,” said Lipo. “What we saw experimentally actually works. And as it gets bigger, it still works. All the attributes that we see about moving gigantic ones only get more and more consistent the bigger and bigger they get, because it becomes the only way you could move it.”
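    As a quick sanity check on those figures, the reported trial (a 4.35-ton replica, about 328 feet in 40 minutes, 18 people) works out to a very modest average speed. The short calculation below simply converts units and assumes metric tons.

    ```python
    # Back-of-the-envelope check of the reported field trial: a 4.35-ton replica
    # "walked" about 328 feet in 40 minutes by a crew of 18. (Assumes metric tons.)

    FEET_TO_M = 0.3048

    distance_m = 328 * FEET_TO_M          # ~100 m
    time_s = 40 * 60                      # 2,400 s
    mass_kg = 4.35 * 1000                 # ~4,350 kg
    crew = 18

    speed = distance_m / time_s
    print(f"average speed: {speed:.3f} m/s (~{speed * 3.6:.2f} km/h)")              # ~0.042 m/s, ~0.15 km/h
    print(f"statue mass per crew member: {mass_kg / crew:.0f} kg (rocked, not lifted)")  # ~242 kg
    ```

    Roughly 0.04 meters per second averaged over the whole trial is slow, but as Lipo notes, the individual rocking steps move quickly once the statue is oscillating; the hard part is getting it started.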

    Passing the road test

    The roads of Rapa Nui also lend some support to this new theory. At about 14 feet wide with a concave cross-section, these roads were ideal for stabilizing the statues as they moved.

    “Every time they’re moving a statue, it looks like they’re making a road. The road is part of moving the statue,” said Lipo. “We actually see them overlapping each other, and many parallel versions of them. What they are probably doing is clearing a path, moving it, clearing another, clearing it further, and moving it right in certain sequences. So they’re spending a lot of time on the road part.”

    Example of a road moai that fell and was abandoned after an attempt to re-erect it by excavating under its base, leaving it partially buried at an angle. Image: Carl Lipo

    According to the team, there are no other real theories that could explain how the moai were moved. Rapa Nui is notorious for wild theories that have zero evidence, and the team worked to put a real theory to the test. 

    “People have spun all kinds of tales about stuff that’s plausible or possible in some way, but they never go about evaluating the evidence to show that, in fact, you can learn about the past and explain the record that you see in ways that are fully scientific,” said Lipo. “One of the steps is simply saying, ‘Look, we can build an answer here.’”


  • 3D and AI: Excellent Fits for the Fashion Industry


    When you’re buying a new item of clothing, you probably don’t give much thought to the design and assembly processes the garment went through before arriving at the store.

    Creating a piece of apparel starts with a designer sketching out an idea. Then a pattern is made, the fabric is chosen and cut, and the garment is sewn. Finally, the clothing is packaged and shipped.

    To expedite the process, some apparel companies now use 3D technologies including design software, body scans, visualization, and 3D printers. The tools allow designers to envision their creations in a variety of colors, fabrics, and motifs. Avatars known as digital twins are created to simulate how the clothes will look and fit on different body types. Body scans generate measurements for better-fitting clothing and improved product design.

    Some manufacturers incorporate artificial intelligence to streamline operations, and additional companies likely will explore it as it becomes more accurate.

    Not all garment makers are utilizing 3D technologies to their fullest potential, however.

    To advance 3D technology for designers, manufacturers, and retailers, the 3D Retail Coalition holds an annual challenge that spotlights academic institutions and startups that are leading the way. The contest is cosponsored by the IEEE Standards Association Industry Connections 3D Body Processing program, which works with the clothing industry to create standards for technology that uses 3D scans to create digital models.

    The winners of this year’s contest were selected in June at the PI Apparel Fashion Tech Show, held in New York City.

    The Fashion Institute of Technology (FIT) placed first in the academic category. The New York City school offers programs in design, fashion, art, communications, and business.

    PixaScale won the startup category. Based in Herzogenaurach, Germany, the consultancy assists fashion and consumer goods companies with automating content, managing 3D digital assets, and improving workflows.

    Custom-made clothing by 3D and AI

    Ill-fitting garments, shoes, and accessories are problems for clothing companies. The average return rate worldwide for clothing ordered online is more than 25 percent, according to PrimeAI.

    To make ready-to-wear clothing, designers use grading, a process that starts from a sample pattern in a base size, built using established standards and 3D body scans, and then scales it into smaller and larger versions to be mass-produced. But the resulting clothes do not fit everyone.

    Returns, which can be frustrating for shoppers, are costly for clothing companies due to reshipping and restocking expenses.

    Some customers can’t be bothered to send back unwanted items, and they throw them in the garbage, where they end up in landfills.

    “What if we could go back to the days when you would go to a shop, get measured, and someone would custom-make your garment?” posits Leigh LaVange, an assistant professor of technical design and patternmaking at FIT.

    That was the idea behind LaVange’s winning project, Automated Custom Sizing. Her proposal uses 3D technology and AI to produce custom-tailored clothing on demand for all body types. She outlined short- and long-term scalable solutions in her submission.

    “I want to fix our fit problem, but I also realize we can’t do that as an industry without changing the manufacturing process.” —Leigh LaVange

    “I see it [custom sizing] as a solution that can be automated and eventually rolled out across all different types of brands,” she says.

    The short-term proposal involves measuring a person’s base body specifications, such as bust, waist, thighs, biceps, and hips—either manually or from a 3D body scan. An avatar of the customer is then created and entered into a database preloaded with 3D representations of various sizes of the sample garment. The AI program notes the customer’s specs and the existing sizes to determine the best fit. If, for example, the person’s chest matches the medium-size dimensions but the hips are a few millimeters larger, the program still might recommend medium because it determined the material around the hips had enough excess fabric. A rendering of an avatar wearing an item is shown to customers to help them decide whether to make the purchase.
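    A rough sketch of that decision logic, with an invented size chart, invented ease allowances, and invented customer measurements (this is not FIT’s or LaVange’s actual system), might look like the following.

    ```python
    # Hypothetical sketch of the short-term fit-recommendation idea described above.
    # The size chart, ease allowances, and customer measurements are invented.

    NOMINAL_BODY_CM = {                    # body measurements each size is graded for
        "S": {"chest": 92, "waist": 76, "hips": 96},
        "M": {"chest": 100, "waist": 84, "hips": 104},
        "L": {"chest": 108, "waist": 92, "hips": 112},
    }

    EASE_CM = {"chest": 4.0, "waist": 3.0, "hips": 2.5}   # excess fabric beyond the nominal spec


    def recommend_size(body_cm: dict) -> str | None:
        """Return the smallest size whose every fit point covers the customer, treating
        the garment's built-in ease as acceptable slack, so being a few millimeters
        over at one point (e.g., the hips) can still pass."""
        for size, nominal in NOMINAL_BODY_CM.items():      # iterates S -> M -> L
            if all(body_cm[p] <= nominal[p] + EASE_CM[p] for p in nominal):
                return size
        return None                                        # no stock size fits; custom pattern needed


    if __name__ == "__main__":
        customer = {"chest": 99.5, "waist": 83.0, "hips": 104.3}   # hips slightly over the M spec
        print(recommend_size(customer))   # -> "M": the hip ease absorbs the few-millimeter overage
    ```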

    LaVange says her solution will help improve customer satisfaction and minimize returns.

    Her long-term plan is a truly customized fit. Using 3D body scans, an AI program would determine the necessary adjustments to the pattern based on the customer’s specifications and critical fit points, like the waist, while preserving the original design. The 3D system then would make alterations, which would be rendered on the customer’s avatar for approval. The solution would eliminate excess inventory, LaVange says, because the clothing would be custom-made.

    Because her proposals rely on technologies not currently used by the industry and a different way of interacting with customers, a shift in production would be required, she says.

    “Most manufacturing systems today are set up to produce as many units as possible in a single day,” she says. “I believe there’s a way to produce garments efficiently if you set up your manufacturing facility correctly. I want to fix our fit problem, but I also realize we can’t do that as an industry without changing the manufacturing process.”

    A digital asset management platform

    The winning submission in the startup category, AI-First DAM [digital asset management] as an Intelligent Backbone for Agile Product Development, uses 3D technology and AI to combine components of clothing design into a centralized platform.

    Kristian Sons, chief executive of Pixascale, launched the startup in February. He left Adidas in January after nine years at the company, where he was the technical lead for digital creation.

    Many apparel companies, Sons says, still store their 3D files on employees’ local drives or on Microsoft’s SharePoint, a Web-based document-management system.

    Those methods make things difficult because not everyone has access.

    Sons’ cloud-based platform addresses the issue by sharing digital assets, such as images, videos, 3D models, base styles, and documents, with all parties involved in the process.

    That includes designers, seamstresses, and manufacturers. His system integrates with the client’s file management system, providing access to the most recent images, renderings, and other relevant data.

    His DAM system also includes a library of embellishments such as zippers and buttons, as well as fabric options.

    “Getting this information into a platform that everyone can easily access and can understand what others did really builds a foundation for collaboration.” —Kristian Sons

    “Getting this information into a platform that everyone can easily access and track what others did really builds a foundation for collaboration,” he says.

    Sons also is working on incorporating AI agents and large language models to connect with internal systems and application programming interfaces to autonomously conduct simple research requests.

    That might include suggesting new products or different silhouettes, or modifying the previous season’s offerings with new colors, Sons says.

    “These AI agents certainly will not be perfect, but they are a good starting point so designers don’t have to start from scratch,” he says. “I think using AI agents is super exciting because in the past few years in the fashion industry, we have been talking about how AI would do the creative parts, like designing a product. But now we’re talking about the AI doing the low-level tasks.”

    A demonstration of how Pixascale’s DAM works is on YouTube.



  • Network digital twin technology faces headwinds


    What if there were a way to reduce by as much as 70% the incidence of network outages caused by poorly executed software upgrades or the faulty installation of new hardware? What if there were a way to validate the current state of network configurations and track configuration drift to avoid network downtime, performance degradation or security breaches linked to misconfigurations of firewalls and other mission critical network components?

    By applying digital twin technology, network teams can reap the benefits of modeling complex networks in software rather than what many enterprises do today – spend millions of dollars on a shadow IT testing environment or not test at all.

    Digital twin technology is most commonly used today in manufacturing environments, and while it has immense promise in enterprise network environments, there are hurdles that need to be overcome before it becomes mainstream.

    Digital twin: What it is and what it isn’t

    The way Fabrizio Maccioni describes it, digital twin is analogous to Google Maps.

    First, there’s a basic mapping of the network. And just like Google Maps is able to overlay information, such as driving directions, traffic alerts or locations of gas stations or restaurants, digital twin technology enables network teams to overlay information, such as a software upgrade, a change to firewall rules, new versions of network operating systems, vendor or tool consolidation, or network changes triggered by mergers and acquisitions.

    Network teams can then run the model, evaluate different approaches, make adjustments, and conduct validation and assurance to make sure any rollout accomplishes its goals and doesn’t cause any problems, explains Maccioni, senior director of product marketing for digital twin vendor Forward Networks.

    However, digital twin technology is not real time. “We don’t change anything. We’re read only. We don’t change the configuration of network devices,” Maccioni says. (Forward Networks does provide integrations with workflow automation vendor ServiceNow and with the open-source automation engine Ansible.)

    Gartner analyst Tim Zimmerman adds: “These tools typically operate on near-real time or snapshot-based data, which supports validation and documentation but limits their usefulness for real-time troubleshooting or active incident response. This distinction is important. While digital twins can improve planning and reduce cost associated with change, they are not currently positioned as operational tools for live network management.”

    “As a result, adoption has been largely limited to large, complex environments that can justify the investment in additional management software,” Zimmerman says.

    What are the benefits of digital twin in networking?

    “Configuration errors are a major cause of network incidents resulting in downtime,” says Zimmerman. “Enterprise networks, as part of a modern change management process, should use digital twin tools to model and test network functionality business rules and policies. This approach will ensure that network capabilities won’t fall short in the age of vendor-driven agile development and updates to operating systems, firmware or functionality.”

    Gartner estimates that organizations using network digital twins to model configuration and software/firmware updates can reduce unplanned outages by 70%.

    Zimmerman adds that 15% of security breaches are caused by cloud misconfigurations or reconfigurations associated with common use cases like migrating an on-prem app to the cloud. He adds that digital twin tools can ensure that network policies don’t conflict with or prevent data flows as applications are migrated to the public cloud. Other use cases cited by Zimmerman include:

    • Capacity planning to model future traffic growth and infrastructure requirements.
    • Incident replay to reconstruct past outages or breaches to analyze root causes.
    • Security posture validation to simulate attack scenarios, as well as to test network segmentation and firewall policies.
    • Simulating boundary conditions that might differ from expected outcomes.

    The top driver for enterprise customers is risk mitigation, says Scott Wheeler, cloud practice lead at Asperitas Consulting, which provides an as-a-service option for network digital twins. “It’s a place to test things out to make sure the project doesn’t mess everything up.” For example, one enterprise client with a large global network used digital twin technology to model the consolidation of four routing protocols into one. “That implementation went off without a hitch,” says Wheeler.

    Another valuable use case is testing failover scenarios, says Wheeler. Network engineers can design a topology that has alternative traffic paths in case a network component fails, but there’s really no way to stress test the architecture under real world conditions. He says that in one digital twin customer engagement “they found failure scenarios that they never knew existed.”

    Maccioni adds that there are a variety of use cases that are attracting enterprise interest. Some customers start with firewall rules administration, a task that a large enterprise might spend millions of dollars a year on. Once an organization recognizes the benefits of automating firewall rule management, they might branch out into other areas, such as outage prevention, troubleshooting, and compliance.

    “We’re also starting to see security use cases be a driver,” Maccioni says. Digital twin technology can help organizations create a single source of truth that helps eliminate friction between security operations and network operations teams when it comes to troubleshooting.

    What are the barriers to widespread adoption?

    One of the major barriers is that network digital twin is not offered by the major infrastructure vendors or network management vendors as part of their core functionality. That may change, but for now, if you want to deploy digital twin you need to engage with a third-party provider. “This is a whole new project, a whole separate environment. It’s a good-sized effort,” Wheeler explains.

    And there doesn’t seem to be a standard way to accomplish digital twin. For example, Forward Networks uses a proprietary data collection method called Header Space Analysis, which was developed while the founders of the company were at Stanford University. It enables the creation of a virtual copy of a network using configuration data and operational state information. 

    Forward Networks enables customers to perform queries against the model. And it overlays other types of data, such as network performance monitoring, in order to facilitate troubleshooting. The snapshot process (collecting and processing the data) can take several hours in a large enterprise network and might be conducted, for example, a couple of times a day. So, the model is current, but not real-time.
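    As a simplified illustration of what “performing a query against the model” can look like, here is a toy reachability check over a hand-written snapshot. The device names, links, and deny rules are invented, and this is not Forward Networks’ data model or query language.

    ```python
    # Hypothetical sketch of querying a network "snapshot" for reachability.
    # Device names, links, and ACL rules are invented; real digital-twin products build
    # the model from collected configuration and state, not from a hand-written dict.

    from collections import deque

    SNAPSHOT = {
        "links": {                     # adjacency derived from the collected topology
            "branch-sw1": ["edge-fw"],
            "edge-fw": ["core-rtr"],
            "core-rtr": ["dc-fw"],
            "dc-fw": ["app-server"],
            "app-server": [],
        },
        "acl_deny": {("dc-fw", "app-server", 23)},   # (device, next_hop, dst_port) blocked
    }


    def reachable(src: str, dst: str, dst_port: int, snap: dict) -> bool:
        """Breadth-first search over the modeled topology, honoring modeled ACLs."""
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                return True
            for nxt in snap["links"].get(node, []):
                if (node, nxt, dst_port) in snap["acl_deny"] or nxt in seen:
                    continue
                seen.add(nxt)
                queue.append(nxt)
        return False


    if __name__ == "__main__":
        print(reachable("branch-sw1", "app-server", 443, SNAPSHOT))  # True: HTTPS permitted
        print(reachable("branch-sw1", "app-server", 23, SNAPSHOT))   # False: telnet blocked at dc-fw
    ```

    Running this kind of query against a fresh snapshot, rather than against the live network, is what lets teams evaluate a change before any device configuration is touched.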

    Asperitas uses an open-source framework called EVE-NG (emulated virtual environment – next generation) to reverse engineer the network. Wheeler explains if enterprise network engineers wanted to create a digital twin using EVE-NG, they would have to take on the coding work required to build the virtual network and would also need to constantly update it to reflect changes to the network.

    Wheeler adds that deploying digital twin requires a significant effort, both in terms of complexity and cost. And it is typically limited to modeling the impact of a change involving a single component from a single vendor. Or to a specific part of the network, such as a campus, says Zimmerman.

    Even within a campus environment, Zimmerman has identified three levels of digital twins: The first level is network configuration and parameter/policy validation; the second level is single vendor equipment replacement or upgrade; and the third level is multiple vendor migration or vendor replacement.

    The future of digital twins in networking

    Gartner points out that “enterprise IT leaders continue to face a combination of challenges: increasing network complexity, heightened cybersecurity risk, and a shortage of skilled personnel. In this context, enterprise network digital twins are emerging as a tool to support network resilience and operations planning.”

    But that won’t happen overnight. Gartner expects that in the next 3-5 years, digital twins will be used to model parts of campus networks, and within the next 10 years they will expand to the entire network.

    Maccioni says network digital twin technology adoption had been somewhat slow because the technology represented a new concept for network engineers. “It is now resonating more with customers” as awareness grows and as enterprises begin to allocate budget for digital twin, he adds.

    Wheeler agrees that there are headwinds, including the fact that “you don’t have a lot of push from large network vendors to do it.” But he adds, “If some of those barriers are knocked down, I think you’ll see accelerated adoption.”

    Zimmerman adds that, “for broader adoption, we feel that the ability to model composite networks of individual components (whether it is a single vendor network or ultimately, a network with multiple vendor components) is needed to move the market ahead.”

    However, there’s a huge difference between deploying digital twin in a factory and in a global enterprise network. A manufacturing facility is a controlled environment with a discrete number of devices and a fixed, linear production process. A global network can have tens of thousands of endpoints and is dynamic – end users are mobile, data paths change in real time, etc.

    The ultimate vision, says Zimmerman, is a digital twin that “gives enterprise IT leaders the ability to test day-to-day operational workflows on their existing end-to-end network, simulating any operating system or configuration changes in real time and testing boundary conditions that today must be manually configured.”

    But, he adds, “this may require the processing power of quantum computing and the storage capacity of the cloud.”



  • Noncontact Motion Sensor Brings Precision to Manufacturing


    Aeva Technologies, a developer of lidar systems based in Mountain View, Calif., has unveiled the Aeva Eve 1V, a high-precision, noncontact motion sensor built on its frequency modulated continuous wave (FMCW) sensing technology. The company says that the Eve 1V measures an object’s motion with accuracy, repeatability, and reliability—all without ever making contact with the material. That last point is key for the Eve 1V’s intended environment: Industrial manufacturing.

    Today’s manufacturing lines are under pressure to deliver faster production, tighter tolerances, and zero defects, often while working with a wide variety of delicate materials. Traditional tactile tools such as measuring wheels and encoders can slip, wear out, and cause costly downtime. Many noncontact alternatives, while promising, are either too expensive or fall short in accuracy and reliability under real-world conditions, says Mina Rezk, cofounder and chief technology officer at Aeva.

    “Eve 1V was built to solve that exact gap: A compact, eye-safe, noncontact motion sensor that delivers submillimeter-per-second velocity accuracy without touching the material, so manufacturers can eliminate slippage errors, avoid material damage, and reduce maintenance-related downtime, enabling higher yield and more predictable operations,” Rezk says.

    Unlike traditional lidar, which sends bursts of light and waits for those bursts to return to make measurements, FMCW continuously emits a low-power laser while sweeping its frequency. By comparing outgoing and returning signals, it detects frequency shifts that reveal both distance and velocity in real time. Adding the measurement of an object’s velocity to its position in three-dimensional space is what makes FMCW a type of 4D lidar.
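    For readers curious how one frequency measurement can encode both quantities, the textbook FMCW relations are: a chirp with slope S = B/T produces a range beat frequency f_r = 2RS/c, and target motion adds a Doppler shift f_d = 2v/λ; alternating up- and down-chirps lets the two be separated. The sketch below uses illustrative parameters, not Aeva’s specifications.

    ```python
    # Generic FMCW range/velocity recovery from beat frequencies (textbook relations;
    # the wavelength, bandwidth, and chirp duration below are illustrative, not Eve 1V specs).

    C = 299_792_458.0                 # speed of light, m/s
    WAVELENGTH = 1550e-9              # assumed operating wavelength, m
    BANDWIDTH = 4e9                   # chirp bandwidth B, Hz
    CHIRP_T = 10e-6                   # chirp duration T, s
    SLOPE = BANDWIDTH / CHIRP_T       # chirp slope S = B/T, Hz/s


    def range_and_velocity(f_up: float, f_down: float) -> tuple[float, float]:
        """Recover range and radial velocity from up- and down-chirp beat frequencies,
        using the convention f_up = f_range - f_doppler and f_down = f_range + f_doppler."""
        f_range = (f_up + f_down) / 2.0
        f_doppler = (f_down - f_up) / 2.0
        return C * f_range / (2.0 * SLOPE), WAVELENGTH * f_doppler / 2.0


    if __name__ == "__main__":
        # Simulate a target 2.0 m away approaching at 0.5 mm/s (the sub-mm/s regime):
        f_r = 2 * 2.0 * SLOPE / C          # ~5.3 MHz range beat
        f_d = 2 * 0.0005 / WAVELENGTH      # ~645 Hz Doppler shift
        print(range_and_velocity(f_r - f_d, f_r + f_d))   # -> (~2.0 m, ~0.0005 m/s)
    ```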

    Eve 1V is the second member of its Eve 1 family, following the launch of the Eve 1D earlier this year. The Eve 1D is a compact displacement sensor capable of detecting movement at the micrometer scale, roughly 1/100 the thickness of a human hair. “Together, Eve 1D and Eve 1V show how we can take the same FMCW perception platform and tailor it for different industrial needs: Eve 1D for distance measurement and vibration detection, and Eve 1V for precise velocity and length measurement,” Rezk says.

    Future applications could extend into robotics, logistics, and consumer health, where noncontact sensing may enable the detection of microvibrations on human skin for accurate pulse and blood-pressure readings.

    FMCW Lidar for Precision Manufacturing

    The company’s core FMCW architecture, originally developed for long-range 4D lidar for automobiles, can be adjusted through software and optics for highly precise motion sensing at close range in manufacturing, according to Rezk. This flexibility means the system can track extremely slow movements, down to fractions of a millimeter per second, in a factory setting, or it can monitor faster motion over longer distances in other applications.

    By avoiding physical contact, Eve 1V eliminates wear and tear, slippage, contamination, or the need for physical access to the part. “That delivers three practical advantages in a factory: One, maintenance-free operation with no measuring wheels to replace or recalibrate; two, material friendliness—you can measure delicate, soft, or textured surfaces without risk of damage, and three, operational robustness—no slippage errors and fewer stoppages for service,” Rezk says. Put together, that means more uptime, steady throughput, and less scrap, he adds.

    When measuring velocity, engineers often rely on one of three tools: encoders, laser velocimeters, or camera-based systems. Each has its strengths and its drawbacks. Traditional encoders are low-cost but can wear down over time. Laser-based velocity-measurement systems, while precise, tend to be large and expensive, making them difficult to implement widely. And camera-based approaches can work for certain inspection tasks, but they usually require markers, controlled lighting, and complex processing to measure speed accurately.

    Rezk says that the Eve 1V system offers a balance of these options. It provides precise and consistent velocity measurements without contacting material, making it compact, safe, and simple to install. Its outputs are comparable with existing encoder systems, and because it doesn’t rely on physical contact, it requires minimal maintenance.

    This approach helps cut down on wasted energy from slippage, eliminates the need for maintenance tied to parts that wear out, and ultimately lowers long-term operating costs—especially when compared with traditional contact-based systems or expensive laser options.

    This method avoids stitching together frame-by-frame comparisons and resists interference from sunlight, reflections, or ambient light. Built on silicon photonics, it scales from micrometer-level sensing to millimeter-level precision over longer ranges. The result is clean, repeatable data with minimal noise—outperforming legacy lidar and camera-based systems.

    Aeva is expecting to begin full production of the Eve 1V in early 2026. The Eve 1V reveal follows a recent partnership with LG Innotek, a components subsidiary of South Korea’s LG Group, under which Aeva will supply its Atlas Ultra 4D lidar for automobiles, with plans to expand the technology into consumer electronics, robotics, and industrial automation.



  • Netskope expands ZTNA with device intelligence for IoT/OT environments


    Netskope this week announced it had updated its universal zero-trust network access (ZTNA) solution to extend secure access capabilities to Internet of Things (IoT) and operational technology (OT) devices that typically cannot run traditional agent software.

    Netskope says the product updates will help organizations address the security challenges of complex hybrid enterprise environments. The Universal ZTNA solution, which comprises Netskope One Private Access and Netskope Device Intelligence, now includes context-aware device intelligence capabilities that automatically discover and classify device risk through the 5G Netskope One Gateway. The company says the updated capabilities enable organizations to implement zero-trust policies for machines and robots that can’t support agent-based security tools.

    “Legacy VPNs, NACs, and early ZTNA tools weren’t designed for the scale, speed, or diversity of today’s enterprises,” said John Martin, chief product officer of Netskope, in a statement. Universal ZTNA gives organizations a consistent way to secure users and devices, whether they are remote or on the local network, he said. “Through smarter, risk-based policies, embedded protection, and seamless performance, we’re helping organizations cut complexity, reduce risk, and turn secure access into an enabler, rather than a barrier.”

    Robert Arandjelovic, senior director of global solution strategy at Netskope, explains that enterprise organizations are adopting universal ZTNA to expand beyond conventional security service edge (SSE) and ZTNA solutions and more effectively secure users and IoT/OT devices across all technology environments. According to a Gartner report, Universal ZTNA is expected to experience widespread adoption and grow by more than 40% by 2027.

    “Universal ZTNA provides this amazing point of consolidation, and it is because tool sprawl is already very present. If we can kill two birds with one stone, and modernize and simplify at the same time, that’s a huge driver. Security teams are never not going to be under budget pressure. In the security space, there are always more things to do and money to spend on it,” Arandjelovic says.

    The enhanced solution also reflects the ongoing convergence of networking and security technologies. Device Intelligence extends remediation and access control to east-west traffic through integrations with third-party NAC vendors, he says, while the firewall capabilities of Netskope One Gateway and Netskope One SSE provide zero trust enforcement points for north-south traffic.

    Netskope is also introducing embedded Universal ZTNA threat and data protection capabilities that inspect private application traffic for remote and local users. This unified approach addresses threats before they reach the network and safeguards sensitive data across all users and devices, Arandjelovic says.

    Netskope is using AI to streamline ZTNA management through its Netskope One Copilot for Private Access feature. The policy optimization tool uses AI to automate granular policy creation for discovered applications while continuously refining and auditing configurations. This is designed to help organizations accelerate ZTNA adoption, reduce complexity, and scale zero-trust implementations with less risk, according to Netskope.

    The enhanced Universal ZTNA solution, including Netskope One Private Access and Netskope Device Intelligence, is available now. More information is available on the Netskope blog.



  • Nvidia and Fujitsu team for vertical industry AI projects


    Nvidia has partnered with Japanese technology giant Fujitsu to work together on vertical industry-specific artificial intelligence projects.

    The collaboration will focus on co-developing and delivering an AI agent platform tailored for industry-specific agents in sectors such as healthcare, manufacturing, and robotics. Though Fujitsu is initially targeting industries in Japan, the company intends to expand globally.


    The two firms also plan to collaborate on integrating the Fujitsu-Monaka CPU family and Nvidia GPUs via Nvidia NVLink Fusion. The combined AI agent platform and computing infrastructure is intended to build agents that continuously learn and improve. This will enable cross-industry, self-evolving, full-stack AI infrastructure that overcomes the limitations of general-purpose computing systems.

    Fujitsu said it aims to create a human-AI co-creation cycle and continuous system evolution by integrating high-speed AI computing with human judgment and creativity. It specifically plans to accelerate manufacturing using digital twins and leverage physical AI like robotics for operational automation designed to address labor shortages and stimulate human innovation.

    In addition, Fujitsu said it intends to co-develop with Nvidia a self-evolving AI agent platform for industries that balances high speed and strong security through multi-tenancy support. The platform will be built on Fujitsu Kozuchi, a cloud-based AI platform, and will integrate Fujitsu’s AI workload orchestrator technology with Nvidia’s Dynamo platform.

    The self-evolving AI agents and AI models will be built using Nvidia NeMo and by enhancing Fujitsu’s multi-AI agent technologies, including optimization of Fujitsu’s Takane AI model. The agents will be deployed as Nvidia NIM microservices.



  • Video Friday: Drone Easily Lands on Speeding Vehicle


    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
    IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

    Enjoy today’s videos!

    We demonstrate a new landing system that lets drones safely land on moving vehicles at speeds up to 110 kilometers per hour. By combining lightweight shock absorbers with reverse thrust, our approach drastically expands the landing envelope, making it far more robust to wind, timing, and vehicle motion. This breakthrough opens the door to reliable high-speed drone landings in real-world conditions.

    [ Createk Design Lab ]

    Thanks, Alexis!

    This video presents an academic parody inspired by KAIST’s humanoid robot moonwalk. While KAIST demonstrated the iconic move with robot legs, we humorously reproduced it using the Tesollo DG-5F robot hand. A playful experiment to show that not only humanoid robots but also robotic fingers can “dance.”

    [ Hanyang University ]

    Twenty years ago, Universal Robots built the first collaborative robot. You turned it into something bigger. Our cobot was never just technology. In your hands, it became something more: a teammate, a problem-solver, a spark for change. From factories to labs, from classrooms to warehouses. That’s the story of the past 20 years. That’s what we celebrate today.

    [ Universal Robots ]

    The assistive robot Maya, newly developed at DLR, is designed to enable people with severe physical disabilities to lead more independent lives. The new robotic arm is built for seamless wheelchair integration, with optimized kinematics for stowing, ground-level access, and compatibility with standing functions.

    [ DLR ]

    Contoro and HARCO Lab have launched an open-source initiative, ROS-MCP-Server, which connects AI models (for example, Claude, GPT, Gemini) with robots using a robot operating system and the Model Context Protocol. This software enables AI to communicate with multiple ROS nodes in the language of robots. We believe it will allow robots to perform tasks previously impossible due to limited intelligence, help robotics engineers program robots more efficiently, and enable nonexperts to interact with robots without deep robotics knowledge.

    [ GitHub ]

    Thanks, Mok!

    Here’s a quick look at the Conference on Robotic Learning (CoRL) exhibit hall, thanks to PNDbotics.

    [ PNDbotics ]

    Old and busted: sim to real. New hotness: real to sim!

    [ Paper ]

    Any humanoid video with tennis balls should be obligated to show said humanoid failing to walk over them.

    [ LimX ]

    Thanks, Jinyan!

    The correct answer to the question “Can you beat a robot arm at tic-tac-toe?” should be “No. No, you cannot.” And you can’t beat a human, either, if they know what they’re doing.

    [ AgileX ]

    It was an honor to host the team from Microsoft AI as part of their larger educational collaboration with the University of Texas at Austin. During their time here, they shared this wonderful video of our lab facilities.

    Moody lighting is second only to random primary-colored lighting when it comes to making a lab look science-y.

    [ The University of Texas at Austin HCRL ]

    Robots aren’t just sci-fi anymore. They’re evolving fast. AI is teaching them how to adapt, learn, and even respond to open-ended questions with advanced intelligence. Aaron Saunders, chief technology officer of Boston Dynamics, explains how this leap is transforming everything, from simple controls to full-motion capabilities. While there are some challenges related to safety and reliability, AI is significantly helping robots become valuable partners at home and on the job.

    [ IBM ]



  • This Mexican Student Is Engineering a Healthier Future


    Most of us have heard the adage “an ounce of prevention is worth a pound of cure.” But when it comes to personal health, many people overlook preventative measures such as diet and exercise. Instead, they tend to rely on medical professionals to save the day after they’ve gotten sick.

    Ximena Montserrat Ramirez Aguilar is working to change that by educating her fellow Mexicans about how to manage their health so they can avoid undergoing treatment for preventable conditions such as Type 2 diabetes and its complications affecting the eyes, cardiovascular system, brain, heart, kidneys, and other organs.

    Ximena Montserrat Ramirez Aguilar

    MEMBER GRADE:

    Student member

    UNIVERSITY:

    Universidad Autónoma de Nuevo León, in Monterrey, Mexico

    MAJOR:

    Biomedical engineering

    Ramirez envisions her career as advancing health through disease prevention, but she acknowledges that, as an undergraduate, she is still discovering how to turn her vision into reality. A senior studying biomedical engineering at the Universidad Autónoma de Nuevo León (UANL), in Monterrey, Mexico, she is the founding chair of her school’s IEEE Engineering in Medicine and Biology Society (EMBS) student branch. The student member’s research interests in neuroengineering and artificial intelligence are shaping her vision for the future of health care.

    “I’ve always been passionate about technology and health,” she says. “Biomedical engineering is giving me a way to combine these two worlds and work on solutions that make a real difference in people’s lives.”

    Her growing influence in IEEE coupled with her academic achievements signal a promising, influential career.

    From chemistry to caring

    Ramirez was born in Zacatecas, Mexico, known for its silver mines, agriculture, and strong cultural pride. From a young age, she loved science—particularly chemistry—and thrived in schools designated for advanced learners.

    Her first exposure to the health care field came during high school, when she trained as a nursing technician. Her high school curriculum was organized as a co-op program, which included traditional classes alternating with internships in nursing. Ramirez interned at the Hospital Universitario Dr. Jose Eleuterio Gonzalez in Monterrey, Mexico.

    Alternating between the academic and vocational tracks allowed her to graduate with a diploma and a technical degree at the same time. Speaking of her early experiences, she says, “I saw how many patients struggled, not just with their conditions but also with the logistics of seeking and coordinating treatment. That made me want to work at the intersection of medicine and innovation.”

    With her father working as a materials engineer and her mother as an accountant, she grew up in a household where technical problem-solving and analytical thinking were part of daily life.

    That blend of influences reinforced her decision to pursue engineering as a career rather than the medical field, she says.

    Exploring neuroengineering and AI

    Since beginning her studies at UANL in 2021, Ramirez has focused on neuroengineering, one of three specializations the school offers. She has explored the role artificial intelligence plays in diagnosing and treating conditions including Alzheimer’s disease, depression, epilepsy, and schizophrenia.

    Through the IEEE mentoring program, she received guidance from global experts including a doctor from India who helped refine her early AI projects.

    Her work quickly evolved from class assignments to projects with real-world potential.

    “The project I’m most excited about has not been published, but it mainly consisted of using convolutional neural networks in medical image processing (MRI) and machine learning in the diagnosis of neurodegenerative diseases,” she says.
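    Her model is unpublished, so purely as an illustration of what “convolutional neural networks for MRI” means in practice, here is a generic, minimal PyTorch sketch of a 2D CNN that classifies image slices. The layer sizes, 128-by-128 input, and two-class output are assumptions for illustration, not her architecture.

    ```python
    # Generic, minimal 2D CNN for classifying MRI slices (illustrative only; this is
    # not the unpublished model described above). Assumes grayscale 128x128 inputs and
    # a binary output such as "control" vs. "patient".

    import torch
    import torch.nn as nn


    class SliceClassifier(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
            )
            self.classifier = nn.Linear(32 * 32 * 32, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.features(x)          # (N, 32, 32, 32)
            x = torch.flatten(x, 1)       # (N, 32*32*32)
            return self.classifier(x)     # raw class scores (logits)


    if __name__ == "__main__":
        model = SliceClassifier()
        dummy_slices = torch.randn(4, 1, 128, 128)   # a batch of 4 fake grayscale slices
        print(model(dummy_slices).shape)             # -> torch.Size([4, 2])
    ```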

    This year she broadened her scope by attending the IEEE International Conference on Robotics and Automation in Atlanta, where she gained exposure to both industrial and academic applications of robotics.

    “In Mexico, people usually don’t think about their health until they’re already sick. I want to focus on using technology and education to keep people healthy.”

    Currently she is an intern at Auna, a health care network in Latin America. She contributes to improving the patient experience in hospitals across Mexico, Colombia, and Peru.

    “I design projects aimed at improving the quality of care and making the hospital intervention more effective for patients across different stages: prevention/wellness, diagnosis, hospitalization, rehabilitation, and post-discharge follow-up,” she says. She declined to provide specific examples, citing medical confidentiality agreements.

    “My internship is about finding ways to make health care not just effective but also more humane,” she says. “It’s about improving processes so patients feel cared for—from the moment they enter the hospital until they leave.”

    Finding leadership and purpose in IEEE

    Ramirez founded the IEEE EMBS student branch in 2023. As chair, she represents the branch at IEEE Region 9 meetings, where she advocates for mentorship opportunities and collaboration with other IEEE groups.

    Through her involvement, she says, she has gained not only technical knowledge but also critical soft skills in leadership, time management, and teamwork.

    “IEEE taught me how to lead with empathy and how to work with people from different backgrounds,” she says. “It has expanded my vision beyond Mexico, showing me challenges and innovations happening all over the world.”

    She says she plans to pursue a master’s degree abroad—potentially in public health or AI for medical devices—and ultimately a Ph.D. Her long-term goal is to launch a business focused on developing health care innovations, specifically in disease prevention.

    A future built on innovation

    For Ramirez, improving health care means more than developing cutting-edge technology. It also involves rethinking how people understand and manage their own health.

    “In Mexico, people usually don’t think about their health until they’re already sick,” she says. “I want to focus on using technology and education to keep people healthy.”

    Her vision is as ambitious as it is personal, rooted in her own journey from Zacatecas to Monterrey and beyond.

    As her career advances, she says, she intends to keep IEEE at the center of her professional life.

    “In IEEE I’ve found a community that challenges me to grow, supports me when I fail, and celebrates when I succeed,” she says. “It’s not just about engineering; it’s about building a better future, together.”

