SkyWatchMesh – UAP Intelligence Network

UAP Intelligence Network – Real-time monitoring of official UAP reports from government agencies and scientific institutions worldwide

Tag: Technology

  • Andrew Ng: Unbiggen AI


    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


    Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.


    The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

    Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

    When you say you want a foundation model for computer vision, what do you mean by that?

    Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

    What needs to happen for someone to build a foundation model for video?

    Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

    Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.


    It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

    Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

    “In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
    —Andrew Ng, CEO & Founder, Landing AI

    I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

    I expect they’re both convinced now.

    Ng: I think so, yes.

    Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”


    How do you define data-centric AI, and why do you consider it a movement?

    Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

    When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.

    The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

    You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

    Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

    When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

    Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

    “Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
    —Andrew Ng

    For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
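    The tooling Ng describes can be sketched in a few lines. This is a hypothetical illustration, not LandingLens code: it groups each image’s annotations and surfaces the images whose annotators disagree, so relabeling effort goes exactly where it is needed.

```python
from collections import defaultdict

def flag_inconsistent(annotations):
    """Return image IDs whose annotators disagree on the label.
    annotations: iterable of (image_id, label) pairs."""
    seen = defaultdict(set)
    for image_id, label in annotations:
        seen[image_id].add(label)
    return sorted(img for img, labels in seen.items() if len(labels) > 1)

annotations = [
    ("img_001", "scratch"), ("img_001", "scratch"),
    ("img_002", "dent"),    ("img_002", "scratch"),  # annotators disagree
    ("img_003", "pit"),     ("img_003", "pit"),
]
print(flag_inconsistent(annotations))  # → ['img_002']
```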

    Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

    Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

    One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

    When you talk about engineering the data, what do you mean exactly?

    Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

    For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
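    That kind of targeted diagnosis is ordinary per-slice error analysis. A minimal sketch, assuming each evaluation example carries a hypothetical condition tag such as "car_noise":

```python
from collections import defaultdict

def error_by_slice(records):
    """records: (slice_tag, was_correct) pairs from an evaluation set.
    Returns (tag, error_rate) pairs ranked worst-first, showing where
    extra data collection would pay off most."""
    totals, errors = defaultdict(int), defaultdict(int)
    for tag, correct in records:
        totals[tag] += 1
        if not correct:
            errors[tag] += 1
    rates = {tag: errors[tag] / totals[tag] for tag in totals}
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)

records = [("car_noise", False), ("car_noise", False), ("car_noise", True),
           ("quiet", True), ("quiet", True), ("quiet", False),
           ("cafe", True), ("cafe", True)]
print(error_by_slice(records)[0][0])  # → car_noise
```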


    What about using synthetic data, is that often a good solution?

    Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

    Do you mean that synthetic data would allow you to try the model on more data sets?

    Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
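    The targeted generation Ng describes can be sketched as follows. The `synthesize` callable is a stand-in for a real generator (a renderer, a GAN, or simple augmentation), and all names here are illustrative:

```python
import random

def targeted_augment(dataset, target_class, n_new, synthesize):
    """Add n_new synthetic examples only for the class that error
    analysis flagged as weak, leaving the rest of the data untouched.
    dataset: list of (features, label) pairs."""
    seeds = [x for x, y in dataset if y == target_class]
    extra = [(synthesize(random.choice(seeds)), target_class)
             for _ in range(n_new)]
    return dataset + extra

random.seed(0)
data = [([0.2, 0.9], "scratch"), ([0.8, 0.1], "pit_mark"), ([0.7, 0.2], "pit_mark")]
aug = targeted_augment(data, "pit_mark", n_new=4,
                       synthesize=lambda x: [v + random.uniform(-0.05, 0.05) for v in x])
print(len(aug), sum(1 for _, y in aug if y == "pit_mark"))  # → 7 6
```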

    “In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
    —Andrew Ng

    Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.


    To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

    Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

    One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.

    How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

    Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
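    A drift flag like the one mentioned here can start as something very simple: compare a recent window of some monitored statistic against a baseline window. This is a crude mean-shift check with an assumed threshold; a production tool would likely use a proper two-sample test instead.

```python
import statistics

def drift_flag(baseline, recent, k=3.0):
    """Flag drift when the recent window's mean moves more than
    k baseline standard deviations away from the baseline mean."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.fmean(recent) - mu) > k * sigma

baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0]  # e.g. mean image brightness per batch
steady   = [10.1, 9.9, 10.0, 10.2]
shifted  = [13.5, 13.8, 13.2, 13.6]            # lighting in the factory changed
print(drift_flag(baseline, steady), drift_flag(baseline, shifted))  # → False True
```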

    In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

    So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

    Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

    Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

    Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.


    This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”



  • How AI Will Change Chip Design


    The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

    Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.

    But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

    How is AI currently being used to design the next generation of chips?

    Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

    Heather Gorr, MathWorks

    Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

    What are the benefits of using AI for chip design?

    Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
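    The workflow Gorr outlines can be illustrated with a toy example: evaluate the expensive model at a handful of design points, build a cheap interpolating surrogate, then run the Monte Carlo sweep against the surrogate. Everything below is a simplified stand-in for a real physics solver and reduced-order model.

```python
import bisect
import math
import random

def expensive_model(x):
    """Stand-in for a physics-based simulation that would be costly to run."""
    return math.sin(x) + 0.1 * x ** 2

# 1. Run the expensive model at a handful of design points.
xs = [i * 0.5 for i in range(13)]            # design points 0.0 .. 6.0
ys = [expensive_model(x) for x in xs]

# 2. Cheap surrogate: piecewise-linear interpolation over those samples.
def surrogate(x):
    i = min(max(bisect.bisect_left(xs, x), 1), len(xs) - 1)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# 3. The Monte Carlo sweep hits only the surrogate, not the solver.
random.seed(0)
samples = [surrogate(random.uniform(0.0, 6.0)) for _ in range(10_000)]
print(round(sum(samples) / len(samples), 3))
```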

    So it’s like having a digital twin in a sense?

    Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments, letting you sweep through all of those different situations and come up with a better design in the end.

    So, it’s going to be more efficient and, as you said, cheaper?

    Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

    We’ve talked about the benefits. How about the drawbacks?

    Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.

    Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

    One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

    How can engineers use AI to better prepare and extract insights from hardware or sensor data?

    Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
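    As a toy illustration of that frequency-domain exploration, here is a naive discrete Fourier transform used to pick out a sensor signal’s dominant frequency. It is O(n²) and only a sketch; in practice you would reach for an FFT routine from an established library.

```python
import cmath
import math

def dominant_frequency(signal, sample_rate):
    """Return the frequency (Hz) of the strongest non-DC component,
    using a naive DFT over the first half of the spectrum."""
    n = len(signal)
    mags = []
    for k in range(1, n // 2):
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        mags.append((abs(coeff), k))
    _, k = max(mags)
    return k * sample_rate / n

# A 5 Hz sine sampled at 100 Hz for one second.
rate = 100
sig = [math.sin(2 * math.pi * 5 * t / rate) for t in range(rate)]
print(dominant_frequency(sig, rate))  # → 5.0
```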

    One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.

    What should engineers and designers consider when using AI for chip design?

    Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

    How do you think AI will affect chip designers’ jobs?

    Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

    How do you envision the future of AI and chip design?

    Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.



  • Dr. Garry P. Nolan – Dec. 10th, 2021 Transcript: “They May Be From Another Level of Reality That We Don’t Understand”


    “To have a group of scientists who are supposed to be leading thinkers, debase people who are interested in thinking about new ideas is, to me, that’s heretical.”

    ~Dr. Garry P. Nolan

    ~~~

    If you like what you see on my blog, my Twitter and YouTube channel and appreciate the time and effort, here’s my Patreon, PayPal and Venmo.

    ~~~

    Patreon = https://www.patreon.com/ufojoe

    PayPal – ufojoe11@aol.com

    Venmo – www.venmo.com/u/ufojoe

    ~~~

    I read the article and share my take on it. If you go back to the beginning, you’ll see I also read segments from my Dr. Kit Green interview that is related to this.

    ~~~

    Transcript of Dialogue Between Dr. Garry P. Nolan and Jesse Michels on Physiological Effects of UFOs on Humans, and Analysis of Alleged UFO Debris

    ~~~

    Dec 10, 2021

    ~~~

    Below is a transcript, made by David Haith (and supplemented by me), of an interview Jesse Michels of the American Alchemy podcast conducted with Dr. Garry P. Nolan of Stanford University, with a brief “appearance” from Dr. Hal Puthoff of EarthTech International. The discussion focused on Nolan’s research and testing of recovered materials that allegedly came from a UFO, and also on the brain effects and anomalies of people who have experienced close contact with UFOs.

    ~~~

    Jesse Michels (JM): Dr. Garry Nolan is a well-respected microbiologist and geneticist at Stanford. Along with his PhD students, he has spun up multiple companies that have sold for nine figures.

    ~~~

    ~~~

    JM: I’m here with my friend Dr. Garry Nolan, here at the Nolan Research Lab at Stanford. Very few people, I think, marry traditional science and the study of anomalous, kind of heterodox, subjects like UFOs and aliens. Where did your interest stem from, originally?

    ~~~

    ~~~

    Dr. Garry P. Nolan (GN): Somewhere, very early on, I started reading science fiction.

    JM: Do you have any favorite authors?

    GN: More recently, Iain Banks and Arthur C. Clarke, obviously amazing.

    JM: And Arthur C. Clarke wrote, “2001: A Space Odyssey,” right?

    GN: Right, right.

    JM: And I always found that interesting, because of the monolith in “2001: A Space Odyssey.”

    GN: Yeah.

    JM: Which is this sort of thing placed on Earth that inspires tech innovation. It’s almost like John Mack would talk about a lot of these alien sightings being slightly more advanced, but barely comprehensible tech for the time, almost inspiring tech innovation.

    GN: I often think of it as laying breadcrumbs in a direction.

    JM: What are the areas of microbiology that you’re currently most excited about?

    GN: So, right now, primarily, I would say we’re interested in cancer and understanding how cancer is put together.

    (Clip of the Nolan Lab:  GN: This is a set of robots…you program each station.)

    GN: Science, in its essence, and scientists, are capitalists. Most of the biology scientists in the country have some relationship to studying cancer. Why? Because the NCI, the National Cancer Institute, is one of the biggest institutes in the country for doling out money. So you follow the money. And if there’s no money for doing this research and there’s no positive feedback for it, and if anything, negative feedback, then the science doesn’t advance.

    JM: This is a crucial point. Despite the upper echelon of society actually being pretty interested in UFOs, it suffers from a severe lack of resources. Just look at the main UFO program over the last 15 years out of the Department of Defense. It’s called AATIP or the Advanced Aerospace Threat Identification Program. AATIP has a $22 million budget (It was AAWSAP that had the $22 million budget, not AATIP, and they studied UFOs AND related phenomena ~Joe).  Just compare that to fighter-jet budgets, which often exceed $100 billion. In other words, discovering extraterrestrial life and even propulsion that could be stepwise better than what we currently have, gets less than one percent of the current F-35 budget.

    ~~~

    At the Making Contact online conference in August, Professor Jeffrey Kripal, PhD, of Rice University echoed similar thoughts: Money and resources are needed to get professionals involved and that includes scientists, theologians and philosophers. All need funding. Then we can train young people to study these anomalies. Until that happens, nothing will change.

    ~~~

    GN: From my point of view, when I got involved…the CIA came to my office. I mean, at first I thought it was a joke, I really did. I was looking across the way here at some of the other offices to see if there was a camera. And so, they said, “We asked around, and everybody said that you’ve built the best tool, called cyTOF.” I was introduced to others who were…I think you people called them “The Invisible College” – it was people like Jacques, people like Hal Puthoff, Eric Davis and Robert Bigelow and Colm Kelleher.

    And then they showed me MRIs of some of these people and most of those people had interactions with UFOs and these were Department of Defense and intelligence people, so supposedly and reasonably, credible individuals. So in looking at the MRIs of some of these people, we noticed an area of the brain that seemed to be disturbed, let’s say, or different in many of these individuals. So it’s an area that I’ve talked about before, between the head of the Caudate and the Putamen, that had increased neural density and it was larger in all these individuals.

    ~~~

    ~~~

    GN: And so you just ask the question, okay, what’s unique about these individuals? Well, they’re all highly functioning and have to make snap decisions. And so, what is that? That’s intuition. One way to explain it would be intuition, or just high intelligence. And then surprisingly, when we looked at the family members, we found that the family members had it, which was fascinating. So that means that structure had a genetic component, whatever it was.

    JM: Here’s a question: Do you have a genetic and phenotypic (relating to the observable characteristics of an individual resulting from the interaction of its genotype with the environment) predisposition to seeing the UFOs? Or, post contact, do you now have a more neuronally dense caudate nucleus and putamen and more psychic?

    GN: No, I don’t think it’s changed. They’re just able to, as you say, see it – they’re able to recognize it for what it might be and not dismiss it.

    JM: Maybe it’s allowing us to kind of widen the doors of our normal, limited scope of perception?

    GN: Right.

    JM: You’re seeing these UFOs that exist kind of interstitially in reality that other people just can’t see…

    GN: Our senses are a filter to stop our brains from being overwhelmed with reality, and so what we see is a limited aspect of everything around us.

    JM: But that is a different model of reality than people currently have today – one I’m sympathetic to – which is that the sensory organs are not necessarily productive, they’re reductive (tending to present a subject or problem in a simplified form).

    GN: Oh yeah, absolutely, no, they’re reductive. Yeah.

    JM: On a default state of almost greater omniscience, but an inability to make sense of things.

    GN: Right. I just don’t know whether or not it is an antenna or anything like that. It just allows us to interpret things better, right? So, for instance, there’s a form of Japanese chess which has a smaller number of pieces, etc.

    ~~~

    GN: So they took masters in this, they set up brainwave [measurements] to figure out what area of the brain might be involved with intuitive moves…where you, basically, you make the unexpected, but brilliant, correct move, and at those moments, the caudate putamen lit up.

    ~~~

    JM: That’s interesting.

    GN: I find that fascinating. And we’re actually working with both autistic and schizophrenic individuals, because this area of the brain, in both autism and schizophrenia, can be damaged. But if you think a little bit about it, schizophrenics hear things and see things that nobody else sees. So, are they all crazy?

    JM: Well that goes into the transmission theory. I think schizophrenics just…it’s like a transmitter being broken or oscillating between different frequencies.

    GN: They can’t turn it off.

    JM: They can’t turn it off.

    GN: They can’t turn it off.

    JM: So we went deep on brain structures – the other component of this is materials. I think a lot of people will be incredibly excited that there are even UFO materials that have possibly been left behind…

    GN: So, Jacques Vallée has collected these kinds of materials from all over the world.

    ~~~

    JM Narration and Video – Jacques Vallée is pretty impressive in his own right. He helped develop a computerized map of Mars while at NASA and he developed one of the early versions of the internet called ARPANET with Doug Engelbart. He’s also the inspiration for the French scientist played by Francois Truffaut in Steven Spielberg’s “Close Encounters of the Third Kind.”

    ~~~

    JM Narration and Video Continue: Jacques was the original person who put forth the multi-dimensional hypothesis – the idea that aliens could co-exist alongside us but remain unseen. For this, he received a lot of backlash from other ufologists. In short, he was too weird even for the weirdest. Jacques Vallée publishes his address online so people who witness UFO crashes across the country can send him the parts. But he has no real way of analyzing the parts without sending them to Dr. Garry Nolan, who can do spectroscopy and real material analysis on them.

    ~~~

    GN: The first question is: Well, what was unique about many of these samples? They were ejected from these objects.

    ~~~

    JM Narration: One of the most interesting samples Nolan mentions is from Ubatuba, Brazil, where a fisherman witnessed an exploding orb off the coast and collected some of the parts.

    ~~~

    GN: It turned out it was magnesium at an extremely high level of purity, but that’s strange because magnesium burns like hell. So, obviously, it had something else in it. So yeah, we did a mass spectrometry analysis of some of those pieces with a highly sensitive instrument – it’s over in the engineering department – called a NanoSIMS, a Secondary Ion Mass Spectrometer.

    ~~~

    ~~~

    GN: And basically, what it lets you do is determine not just the elements by their mass, but also the isotopes by their mass. And one of them was anomalous – the magnesium ratios were way off. I mean, not even close to being natural. It’s interesting, right?
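    One quick aside on what “way off” means here: natural magnesium has a fixed isotopic signature everywhere on Earth (roughly 79% Mg-24, 10% Mg-25, 11% Mg-26), so an anomaly claim amounts to saying the measured ratios sit far outside that signature, well beyond instrument error. Here is a minimal sketch of how such a comparison might be scored – the natural abundances are real reference values, but the “enriched” numbers below are hypothetical, and this is not Nolan’s actual analysis pipeline:

```python
# Natural terrestrial abundances of magnesium isotopes (CIAAW reference values).
NATURAL_MG = {24: 0.7899, 25: 0.1000, 26: 0.1101}

def isotope_anomaly(measured: dict[int, float]) -> float:
    """Largest absolute deviation (in abundance fraction) between a
    measured magnesium isotope distribution and the natural one."""
    total = sum(measured.values())
    # Normalize raw counts so they sum to 1, like abundance fractions.
    fractions = {m: v / total for m, v in measured.items()}
    return max(abs(fractions[m] - NATURAL_MG[m]) for m in NATURAL_MG)

# A terrestrial sample should score near zero...
natural_sample = {24: 789_900, 25: 100_000, 26: 110_100}
# ...while a hypothetical enriched sample scores far outside any
# plausible instrument error.
enriched_sample = {24: 300_000, 25: 100_000, 26: 600_000}

print(isotope_anomaly(natural_sample))   # ~0.0
print(isotope_anomaly(enriched_sample))  # ~0.49
```

    The point of the sketch is only that “anomalous” is a quantitative claim: a deviation of tenths in abundance fraction would be enormous compared with natural terrestrial variation, which is measured in parts per thousand.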

    JM: So you would never find this in naturally occurring…

    GN: You would never find it in nature.

    JM: And you’d need some sort of centrifuge or something to create that isotope ratio?

    GN: Yes.

    JM: Would that be possible?

    GN: Oh, it’s possible, it’s just expensive beyond, you know…

    JM: Most people don’t have access to a centrifuge…

    GN: …especially when these things were found. And the [more important?] question is: Why would you do it?

    JM: Why would you do it, what’s the motivation?

    GN: What’s the motivation for it? Is it something that they’re using, and they need that ratio to accomplish something? Or is it a byproduct of an effect where they take the natural things, and then they’re doing something, and this ends up being the outcome? And then when they’re done with it, they go, “Ehh, pffft (makes a noise of spitting something out of his mouth),” and they throw it out, right? Whether or not these objects are trying to show us something, or they don’t care – we see this happening, and that maybe tells you something. You know, you can sort of reverse engineer, from first principles, maybe what that is. Nobody that I know has figured it out.

    JM: When you look at the five observables of the Unidentified Aerial Phenomena Task Force, things like trans-medium travel, the ability to sort of break conservation of momentum and stop on a dime. Are these, possibly, isotope ratios that unlock these features?

    GN: Yeah! That’s the conclusion you have to come to…it’s used in something. Maybe some of these things that we see are not even technological – maybe they are some kind of living thing.

    JM: I remember Commander David Fravor of the Nimitz sighting in 2004 – he was sent to look at the UFO, and it’s almost as if the thing sees that he’s looking at it, and it’s conscious and almost breathing. Why isn’t the government immediately funding this research? I mean, it feels insane.

    GN: You tell me. I mean, maybe they have done it. Maybe the stuff that we have…somebody is sitting around saying, laughably, “They’re wasting their time on exhaust. We have the engine!”

    If something came from the Andromeda galaxy and it’s a million years ahead of us, it lands on Earth. It has technologies that we don’t understand.

    JM: Some people think that aliens might be us from the future.

    GN: Yeah!

    JM: And if you think about the way we’re evolving, it’s probably smaller bodies, bigger heads…

    GN: Yeah!

    JM: Sort of what you would see a grey alien looking like.

    GN: Yeah. I’ve always been interested in the five percent I don’t know. I’ll publish the ninety-five percent I do know, but I’m always interested in the stuff that I can’t explain, because almost every major discovery has been somebody looking at anomalous data and then constructing a new theory of reality, right? And that’s Thomas Kuhn, “The Structure of Scientific Revolutions” – the notion that almost every scientific revolution was fought tooth and nail by the more conservative skeptics saying, “You can’t possibly be right.”

    JM: And Thomas Kuhn was friends with John Mack, who was the head of the Harvard psychiatry department, who spent the latter part of his career studying alien abductions.

    GN: I didn’t know that.

    JM: Yeah! And he encouraged him to do the study.

    GN: Cool! Well that’s interesting. I’m going to use that in my talks.

    JM: You should.

    GN: You know, that’s, I think, where we’re at right now. The preponderance of evidence now, and the Department of Defense admitting that these things are real. That the data is real. There are no conclusions, (but) the data is real.

    ~~~

    The Mack/Kuhn connection was also new to me so let’s take a quick look at what Mack had to say about Thomas Kuhn in this David J Brown interview in 1996.

    David J. Brown: “Could you say something about your interaction with Thomas Kuhn, regarding your approach to researching the abduction phenomenon?”

    Dr. John Mack: “I knew Thomas Kuhn as a child because our parents were friends. I used to go there every Christmas for eggnog and liver pâté. When I started doing this work I went to see him, and he was interested. He cautioned me in various ways. He advised me to just collect data, to try to suspend judgment, and look out for the traps of language – like real/unreal, exists/doesn’t exist, happened/didn’t happen, intra-psychic/outside. He advised me to just report – to record what people were feeling and saying. And that’s what I’ve tried to do.

    “The other thing that he said was don’t worry about science, because in this culture science has become a new kind of theology. What you’re really interested in, he said, is trying to learn something and gain knowledge, whether it’ll satisfy science or not. Science prefers to study primarily within the purely material world, he said, but don’t worry about that. Now the other thing he said was to just publish in scientific journals, and don’t write a book. This was because he had gotten so much intense interest and flack around his book that sometimes he was troubled about it. He’s kind of a shy man.”

    David J. Brown: “His book, The Structure of Scientific Revolutions, has become standard introductory reading in virtually every History of Science course in the world.”

    Dr. John Mack: “In some ways he seemed to lament the reception of his book. I don’t think that’s right that he did. His book – as with any popularization of any important and complex concept – is going to be misunderstood by a lot of people who are going to want to cloak themselves in his mantle. But I think that if you have something you want to say, it’s okay to do it in a book.”

    David J. Brown: “Why did you write “Abduction?”

    Dr. John Mack: “I didn’t take his advice on that one, and I did write a book. First of all, I couldn’t get what I had to say down in an article, because it’s too complex, and the cases were too elaborate. I wanted to lay out a kind of map of the whole phenomenon as best I could from what I experienced. I thought it was important, regardless of whether these beings are to be taken literally as material entities, or whether they’re something more complex and subtle that crosses over from the unseen into the material world. Whatever it is – daimonic or material reality – it seemed to me important, and a big story that I wanted to report. So that’s what I did.”

    ~~~

    Back to the Nolan interview…

    JM: The theory maybe I like best is the Jacques Vallée/Diana Pasulka theory that the 1947 Roswell crash represented a dividing line: before that, people were seeing angels, demons, leprechauns, fairies – whatever the local contemporary lore of where the sighting took place was. And then after aliens became something in the zeitgeist, that’s what people started to see. But you’re seeing some sort of proto-architecture of a thing that involves beings and crafts, and then you’re recollecting it in a way that is comprehensible to you, given the noble myth or the myth of the time.

    GN: Right. I use the example of…let’s say that there’s a race of intelligent ants out in your garden. They don’t have a clue what’s going on up in the kitchen, right? They couldn’t understand it if they wanted to and neither could you understand what their communications are. How do you talk to them? Well the first thing you would probably do is make a little thing that looks like an ant and put it there and have it do something. And so, maybe that’s what it is? I mean alien means alien, right? I mean, it’s so far different from us that it’s doing its best to talk to us in ways that it can do. They’re either from another planet in this galaxy or elsewhere – underground or nearby or whatever – and they just show up to look at us and because they’re basically maybe looking at their past. Or they’re interdimensional, or they’re from another level of reality that we don’t understand. All speculation, but fun. You can run your mind down those possibilities and realize how much bigger a universe you live in than what you’re dealing with day to day. 

    To have a group of scientists who are supposed to be leading thinkers, debase people who are interested in thinking about new ideas is, to me, that’s heretical.

    JM: And it feels like it’s gotten worse in terms of established scientists. Like, we talk about the Fermi Paradox, which is the mental model, or question, of why we don’t see aliens. That’s Enrico Fermi – the guy who created the theoretical underpinnings for splitting the atom; he was in the Manhattan Project. About as conventionally well-regarded as it gets, and he was thinking about aliens in his off time at Los Alamos. So it’s like, why can’t we do that?

    GN: When your mind expands to a certain point, in terms of what you might consider reality to be, other entities live there.

    JM: So, this should be a rallying cry to anybody watching on the financing front, is there any way we can see the materials?

    GN: I mean, I have some in a locked bank account.

    JM: (Laughs)

    GN: Honestly. I don’t have it hanging around here. I can do a video for you and send it.

    JM: This is also an easy flight so I can come back.

    GN: Okay.

    (ONE WEEK LATER)

    JM: Professor Garry Nolan, thanks for having us back. We’re here a week later, very exciting, we have parts of possible UFO crashes, so what’s the background on what we’re looking at now?

    ~~~

    ~~~

    JM Stand Alone Video: Okay, the parts were a little anticlimactic and small, but he claims to have much bigger parts that we can’t see due to national security sensitivities. But let’s just take these three facts combined about the parts that are on the table:

    1. Observers with no real monetary incentive to lie claimed to see a vehicle that broke the bounds of our current understanding of aerospace limitations.
    2. The materials contain isotope ratios that do not exist naturally on Earth.
    3. A top Stanford microbiologist isn’t ruling out the possibility that these parts could be of extraterrestrial origin.

    Given all that, even though these pieces are small, I think they should get you pretty excited.

    Back to interview…

    JM: And we now know that isotope ratios might have more to do with the properties of the material themselves and the features and what the material can actually do in the physical world than we had previously thought, right?

    GN: Right, correct. The odd thing was that the other piece, which supposedly came from the same event, had exactly the correct isotope ratios for what you would find on Earth. The material up front was what we would call inhomogeneous, or partially mixed. It’s kind of like if you were to take chocolate ice cream and vanilla ice cream and then do a little bit of a swirl – you would see a mixture, and we would call that inhomogeneous. Why would you mix some of these elements? There’s, again, no good reason – there’s no metal that people normally make that has some of the mixtures that we’ve seen. That’s interesting.

    JM: That’s worthy of investigation.

    GN: Yeah, sure!

    ~~~

    JM Voiceover: So that covers all of the anomalies about the pieces of magnesium coming from Ubatuba.

    ~~~

    JM Voiceover: But what about the other sample on the table? Those are pieces of bismuth.

    ~~~

    JM Voiceover: Nolan actually couldn’t recall how it was procured so we had to call Hal Puthoff to get the full scoop.

    GN: Hi Hal.

    JM Stand Alone Video: Dr. Hal Puthoff has had one of the most interesting careers of all time. He was first a laser physicist, and then, out of Stanford Research Institute, he started the government’s psychic spy program. Since then, he’s been doing frontier tech research and has briefed multiple Presidents on UFOs.

    ~~~

    JM: So do you know the original story of how it was kind of procured, originally?

    Hal Puthoff, PhD (HP): The initial story was that it was sent anonymously by someone claiming to be an army officer.

    ~~~

    JM Stand Alone Video:  Long story short, this army officer was going through his grandfather’s archives when he found this rare sample.

    ~~~

    HP: And then written in the diary was that it was a piece from Roswell.

    JM Stand Alone Video: True or not, these thin layers of bismuth and magnesium are very hard to reproduce. Hal claims that they even have the properties to micro-size waveguides for terahertz frequencies.

    HP: It turns out that it reduces the size of the required waveguide for terahertz frequencies down to about 1/30th of the wavelength, which is amazing. So it means you can basically put 30 waveguides in the volume of a single waveguide at terahertz frequencies.

    JM: Got it, thanks a lot, Hal. I really appreciate it.

    ~~~

    For a lot more details, I’m going to include a segment from Dr. Puthoff’s lecture on February 8th, 2020, in Berkeley Springs, West Virginia, at the Arlington Institute’s “Transitions Talk.”

    ~~~

    Dr. Hal Puthoff: “So let me give you a couple of examples. We have one here called ‘Metamaterials for Aerospace Use.’ I can talk about an open source sample, I can’t talk about others. And many of you have probably, if you’ve seen various TV programs and interviews and so on, you’ll know about this.

    “There was a sample that was sent anonymously by a military source. He claimed that his grandfather had been involved in a crash retrieval operation and had gotten some material from it. He didn’t want to identify himself, but he sent it forward for analysis.

    “That’s what it looked like. It was a multi-layered piece of material. And Linda Howe was the one who got it and began shopping it around, trying to get analysis of it. She’s really a stalwart person trying to find out about this. It had layers of bismuth – the size of that is less than a human hair. Those are the black areas you see through here. And then they were separated by layers of magnesium, which are the lighter areas. And so that’s what the sample actually looked like.

    “There’s been a lot of controversy, a lot of discussion about this because, after all, here you see something that does look like it was in a crash. The thin lines were the bismuth lines.

    “So, what do we really know about this? The chain of custody is non-existent. The provenance is questionable. So for all we know, it could be a hoax, it could be a fraud, it could be some slag from a foundry floor of some factory. But nonetheless, it was an unusual sample, so we decided, okay, well then we should at least take a look at it, have an open mind. Early on, Linda Howe, the researcher who had this sample given to her by Art Bell, went around to many institutions and groups to try to get some analysis. First of all, a survey of academic publications, interviews with people from organizations involved in special materials and so on. Even went to archives of the national labs like Los Alamos or wherever and nobody had any data on this kind of construction having been made.

    “And then there was someone else she went to, and they tried to see whether they could just duplicate the material. In fact, they had trouble bonding the magnesium and bismuth layers together. So, it wasn’t clear exactly how you would make it. And then finally, in talking to materials experts, they said, ‘Well, let’s just say somebody could make this. What would you use it for?’ And all the material scientists said, ‘I don’t have a clue. I can’t even imagine the reason for constructing something like this.’

    “However, what happens is, a couple of decades or more go by, and suddenly our whole science of so-called metamaterials has come to the fore, developing all kinds of stuff. And lo and behold, a paper gets published which says, you know, if you had a bismuth layer of just this size – it happened to match what we had – separated by magnesium layers of about the size that we see, this would have a very special property. This came out of metamaterials research and was not directly associated with the material.

    “So, it turns out this would make a terahertz wave guide. What’s a terahertz? Well, you hear about megahertz and gigahertz, and the microwave spectrum, and then you hear about infrared radiation. Well, terahertz sort of lies in that no man’s land as far as technology development goes. Above microwaves, above gigahertz but less than the wavelength of infrared heat.

    “So it turns out that ordinarily, when you have a waveguide and you want to send a signal from one place to another, you know, you have a pipe, for example. And the pipe generally has to be about the size of the wavelength – half a wavelength, for example. Well, in this case, there’s a frequency band around five terahertz, and the wavelength is a certain size. Well, it turns out, in this special kind of material, those thin bismuth layers would transmit those signals at 1/20th the size of the wavelength. And so that means you now have sub-wavelength waveguide effects.

    “So what that means is, if you wanna transmit a lot of data at terahertz frequencies, and usually you gotta imagine a stack of waveguides to do it with, now suddenly you’ve got this whole thing micro-sized down and you can carry out your task. And it’s only because of metamaterials being developed. No, there were no metamaterials being developed back in the days when the sample was found, that’s for sure. But anyway, so there’s a possibility this has an important effect.

    “So actually, what we see here then – and this happens in many cases – you get a material sample with unusual characteristics and you want to evaluate it. The method of manufacture is difficult to assess or reproduce, as it was here. The purpose or function is not readily apparent. But then our own science advances over the decades, and finally we get to a place where a possible purpose or function comes to light, which was the case here. We still have this in a pipeline to do a lot of experiments on that we haven’t done yet, because we haven’t raised the funding for it. But there’s more to be done.”
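    The sizes in this lecture are easy to sanity-check from the definitions Puthoff gives: free-space wavelength is the speed of light divided by frequency, a conventional guide is on the order of half a wavelength, and the claimed effect shrinks that to roughly 1/20th of a wavelength (elsewhere quoted as 1/30th). A rough back-of-the-envelope check, treating the ~5 THz band he mentions as an assumed round number rather than a measurement from the sample:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength for a given frequency."""
    return C / freq_hz

f = 5e12                   # ~5 THz, the band mentioned in the lecture
lam = wavelength_m(f)      # ~60 micrometers
conventional = lam / 2     # half-wavelength guide, ~30 micrometers
subwavelength = lam / 20   # claimed sub-wavelength guide, ~3 micrometers

print(f"wavelength at 5 THz:  {lam * 1e6:.0f} um")
print(f"conventional guide:   {conventional * 1e6:.0f} um")
print(f"sub-wavelength guide: {subwavelength * 1e6:.0f} um")
```

    Whether that works out to “30 guides in the volume of one” depends on whether you count the shrink factor along one dimension or square it across the guide’s cross-section; the transcript quotes the linear figure.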

    ~~~

    Back to the Nolan interview…

    JM: What can you do with terahertz that we can’t with current, regular? Just pack in more information?

    GN: Well, it’s…pack in more information, faster, farther. Terahertz is the next thing for communication…that if we can get terahertz waves working efficiently, there’s a whole slew of other electronic and radio communication things that can be done, that can’t be done now.

    JM: Shouldn’t there be things that we’re doing with these materials that show what environments they can sort of withstand or what possible properties they have as well?

    GN: Yeah.

    JM: So like super-high velocity, literally like slingshotting them as fast as you can or like putting them in super-cold environments or super hot. The trans medium thing, making sure they don’t rust underwater because a lot of the UAPs seem to submerge underwater and then come out of the water. Like basic things like that, based on the observables.

    GN: I mean, you could run electricity across them, see if anything’s different. Are they conductors, are they insulators? Again, this is why I think it’s important to get this kind of information out, so that even a skeptic could suggest what should be done. I mean, there are a number of people who have them. I get emails, occasionally, from people. Actually, there’s one that I just got in the last few weeks – an email from somebody who…it’s a glowing object that drops molten metal. I haven’t seen it yet, I’ve just seen pictures of it. But it’s interesting enough that I’m actually going to follow through on that one.

    JM:  We should have some sort of standardized process.

    GN: Exactly – there’s a flow chart that you could put together of what should be done, and once you’ve got that process, things just go in one end and come out the other. And then you give it to the true believers and to the skeptics and let them fight with data, rather than hearsay.

    JM: Do you think if they’re hyper-intelligent aliens, they’re aware that you’re looking into them? Presumably.

    GN: Presumably! Or they don’t care. Or they say, “There’s nothing you’re going to be able to figure out about this, so we don’t care.” Or it’s left behind almost as a, “you figure it out and you can have it.” You know, the breadcrumb trail. 

    JM: That’s what it feels like.

    GN: Yeah.

    JM: Last question for you: Roswell, 1947. Trinity – that crash was 1945…

    GN: Mm-hmm.

    JM: …where the sort of larger piece came from. Do you think that aliens are possibly interested in us splitting the atom? Do you think they’re interested not only in nuclear power and our possible destruction of ourselves, but in figuring out the building blocks of base-layer reality? And so when we split the atom they become more interested, and then maybe they’re interested in the fact that we’re figuring out our own genetic building blocks as well?

    GN: Well, I mean, from their perspective – let’s say a million years ahead – they know that we’re maybe a few hundred years from spreading in our local galactic arm, even if only by conventional craft. Do they want a bunch of angry monkeys running around with bombs? They probably want to keep tabs on us. I would. I mean, if I’ve got a neighbor who is bristling with armaments and always cursing and throwing stuff around, I might want to keep tabs on them. And we’re a bunch of angry monkeys right now.

    JM: Dr. Nolan, appreciate it, this was awesome.

    GN: Thank you.

    JM: And I hope we make real progress, I hope we go through the step sequence of science that it takes to figure out what the hell these things are and what they do.

    ~~~

    If you like what you see on my blog, my Twitter and YouTube channel, and appreciate the time and effort, here’s my Patreon, PayPal and Venmo.

    ~~~

    Patreon = https://www.patreon.com/ufojoe

    PayPal – ufojoe11@aol.com

    Venmo – www.venmo.com/u/ufojoe

