And now an alternate version of the possible physics claims that it doesn't violate Newton's third law because it emits two out-of-phase photons.
I think the fact that NASA and the Chinese have measured thrust under highly controlled conditions, with null controls, speaks volumes about the feasibility of the EmDrive. The first real test will be when one of these ends up in a satellite and demonstrates orbital corrections without reaction mass. Then space nerds will collectively pop the champagne as the next era of space exploration begins.
So if you watch the video all the way through, the root cause of a lot of the recent crazy press about warp drives becomes evident. Think of a long technical presentation getting misinterpreted and then amplified through the echo chamber of modern media.

The video of the researcher's presentation covers two technologies. The first is the Alcubierre drive, which warps space-time to allow very fast travel. The researcher ran computer simulations and set up experiments to validate the theory (in the absence of exotic matter). His conclusion was basically that the theory is sound; one observation from a "low-fidelity" test was that space-time was warped. However, he offered no comment on where the exotic matter would come from, and since the technology hinges on that discovery, it remains a fiction at present.

Then the presentation shifts to the Q-drive experiments: a drive based on pushing against the quantum vacuum using microwave resonance, with no relation to the previous experiments. Two designs were tested. The first design had two units, one with baffling and one without; the unit without baffling was expected by its producer to generate no thrust. Based on their testing and mathematical simulation of the theory, the NASA researchers felt both would produce thrust, and both did. For the second unit, the thrust matched the prediction quite closely on one test and was 1/3 of the predicted value on another. All testing was done with controls as well: disabling the devices while firing everything else up. The stray EM looked like it contributed about 10% of the observed thrust. This is an amazing discovery.

Now the press took this as "NASA confirmed a warp signature and showed thrust on a drive, therefore we're going to the moon in a few hours," which is BS. Even if the Q-drive continues to prove feasible, humans aren't going to do well with much more than 1 g of thrust. Probes, sure, but not humans.
The criticisms I've read of the Q-drive are as follows:

1. "It was done for $50k in part time; this isn't adequate funding for a true test." Given that the researcher had access to multi-million-dollar facilities designed for testing exactly this, and that he's an expert in the field, I feel this criticism is a logical fallacy.

2. "It is caused by stray EM fields." The presentation addresses this, and null controls for it were performed.

3. "It violates conservation of momentum, therefore it can't work." My rebuttal is that it converts energy into momentum, so it doesn't violate conservation of energy. Conservation of momentum is about collisions between particles; to invoke it here ignores the context of the proposed theory.

4. "The theory misuses its terms: the group velocity is not the local velocity, and the treatment of the quantum vacuum is a misuse of the theory, therefore it can't work." I think this is the most interesting criticism, because yes, that is a theoretical flaw. However, the experiment observes thrust, and it's close to what the Q-drive theory predicts. The glaring error in group versus local velocity is bad, but the observation of thrust is repeatable. So our understanding of the theory is wrong, but why? This should energize theorists.

Overall, the video is very exciting in that a reaction-mass-free drive appears possible, which would revolutionize our approach to the space program. It will probably take 20 years before any of this is practical.
I always thought the strong magnetic fields posed huge challenges in and of themselves: eddy currents in the electronics could destroy a lot of potentially critical systems, so shielding and design would require a lot more work.
When I was in high school, there was a ski resort near my house (in Tennessee). Mention this today to 20-somethings and they think I'm a crazy old man. I remember a crazy old man showing me pictures of himself driving a Model T across the frozen river, saying "global warming is real, I've got the proof." The idea that the Clinch could freeze over in the next decade is so far-fetched as to be laughable. I once stayed in the cabins on Hunting Island; Google Earth shows them in the ocean now. The ranger told me that, due to rising sea levels, the cabins would be in the sea within my lifetime. Today, the doomsayers of 20 years ago are proving to be right; the models were too conservative. But, as a species, our cognitive dissonance continues to grow as science is downplayed due to special interests, denial, and ignorance. It won't be long before it becomes obvious to many in Florida and Louisiana. I'd love for all the global warming predictions to be false, but the personal evidence in my lifetime points to the opposite. When are we as a species going to wake up and take collective action?
I found this article to summarize how I view the problem: http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble . It's not just parapsychology; it's the way we do science today. To summarize: a study with a negative result will more likely than not go unpublished, and this has created a meta-statistical crisis. At a 95% confidence level, roughly 1 in 20 studies of a nonexistent effect will reach a positive conclusion by chance, so repeating studies is important. Now cut back grant funding across the country and create a horde of desperate professors cranking through studies to find something to hang the next grant on: they run 20 studies, find 1 with a positive result, and publish it. You end up with a bunch of non-reproducible studies: http://www.jove.com/blog/2012/05/03/studies-show-only-10-of-published-science-articles-are-reproducible-what-is-happening Parapsychology is only the tip of a far bigger crisis in how science is currently conducted, from funding to publication. What can you do? Don't buy into pop-news sensationalist claims from a single study. Start believing a result when multiple studies report the same thing, and remain skeptical.
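The 1-in-20 arithmetic is easy to see in a simulation. A minimal sketch (the study count and threshold here are illustrative assumptions): run many studies of an effect that doesn't exist, test each at the 95% confidence level, and count how many look "significant" by chance. If only those get published, the literature is built on noise.

```python
import random

random.seed(0)

# Simulate 10,000 studies of a nonexistent effect. Each study's p-value is
# uniform on [0, 1] under the null, so it crosses the 0.05 threshold ~5% of
# the time purely by chance.
n_studies = 10_000
alpha = 0.05

false_positives = sum(1 for _ in range(n_studies) if random.random() < alpha)

# With publication bias, these ~500 chance findings are what gets published,
# while the ~9,500 correct negative results sit in file drawers.
print(f"{false_positives} of {n_studies} null studies looked significant")
```

This is also why replication matters: the chance that the *same* false positive repeats in an independent study is alpha again, so two concordant studies are far stronger evidence than one.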
Variance destroys the algorithm. So go to Facebook and tag your face with random objects. Lots and lots of random objects.
I took transistor theory years ago. I remember almost none of the formulas, but I do remember the presentation. The engineering professor started with a quantum mechanical model and expanded the formula until it filled a chalkboard. Then he said, "Notice that the dominant term is..." and circled one of the terms. Then he erased everything but that term and said we'll treat this as equal. He did this three times, with one of the formula expansions taking up three chalkboards. In the end he said, "It's like Box said," and that is why we treat a transistor as a linear amplifier, and why the treatment only applies across a certain range.
Then the next day we had the lab, and the challenge was to build a two-transistor amplifier and show that it was linear across the desired range. You got the parts, wired them together, and put it all up on a scope. It was a mess: there were odd side-bands and crazy noise spikes all through the spectrum, and I spent hours tuning the resistors to lock in the range. Wow, it was eye-opening. Yes, one can model a transistor as a linear amplifier, but it's good to know that the full model is a huge, complicated, noisy quantum mechanical problem that would probably take up 20 chalkboards fully expanded. As Box put it: all models are wrong, some are useful.
The linear model of a transistor has proven very useful, but one always needs to be aware of the boundaries where it doesn't apply, as with any model.
Speaking of which, if anyone is aware of how to define variance in a multivariate Gaussian using Clifford Algebra, I've been searching for such a definition. I've tried deriving it from scratch and can't quite seem to make it fit. I'd like to reformulate some multivariate statistical models into Clifford Algebra for some research I did.
"And what about the concept of integers in the first place? That is, where does one banana end and the next begin? While we think we know visually, we do not have a formal mathematical definition."

I think the author is unaware of the formal definitions of integers that exist. One can start with Hilbert's program ( https://en.wikipedia.org/wiki/Hilbert%27s_program ) as a first attempt at this problem. Then the work of Church and Curry created formal definitions in lambda constructions. But my personal favorite is the recent work in Homotopy Type Theory ( https://en.wikipedia.org/wiki/Homotopy_type_theory ), which formally defines the integers from just 2 axioms, homotopy and univalence. This is in contrast to earlier formulations in ZF theory, which (if I remember correctly) required 18 axioms. To make a blanket statement that we don't have a formal definition of integers ignores the last 100 years of work in the foundations of mathematics. While the article poses some interesting questions, this demonstrated ignorance on the part of the author has me categorizing it as a fluff piece. The introduction and first chapter of Penrose's "Road to Reality" tackle the questions of the article in a far more elegant manner.
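For a concrete taste of what "formal definition" means here, a sketch in Lean 4 (the name `MyInt` is illustrative; this mirrors how Lean's own `Int` is built): the integers as an inductive type over the naturals, with no hand-waving about bananas.

```lean
-- A formal definition of the integers as an inductive type.
-- `ofNat n` represents n, and `negSucc n` represents -(n + 1),
-- so every integer is covered exactly once.
inductive MyInt where
  | ofNat   : Nat → MyInt   --  0,  1,  2, ...
  | negSucc : Nat → MyInt   -- -1, -2, -3, ...
```

Everything else (addition, ordering, induction principles) is then derived mechanically from this definition, which is the whole point: the notion of "integer" is pinned down completely inside the formal system.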
That's a really well-written paper; thank you for finding it. It uses physics to build a mathematical model, which is then fit via statistics to describe sunspot activity, and the model is then predictive of future activity. The technique and science in the paper are solid. I enjoyed reading it.
The end of the article mentions that vector dot and curl operations lack elegance. I've wondered for years why Clifford algebra doesn't catch on in their place.
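Part of the appeal is that the geometric product of two vectors carries both pieces at once: its scalar (grade-0) part is the dot product, and its bivector (grade-2) part is the wedge product, which encodes the same oriented plane the cross product points out of. A minimal sketch for 3D vectors in an orthonormal basis (the function name and component ordering are my own choices for illustration):

```python
import numpy as np

def geometric_product(a, b):
    """Geometric product ab = a.b + a^b of two 3D vectors.

    Returns (scalar_part, bivector_part), where the bivector components
    are given on the basis (e1^e2, e1^e3, e2^e3).
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    scalar = float(np.dot(a, b))                 # grade-0 part: the dot product
    bivector = (a[0]*b[1] - a[1]*b[0],           # e1^e2 component
                a[0]*b[2] - a[2]*b[0],           # e1^e3 component
                a[1]*b[2] - a[2]*b[1])           # e2^e3 component
    return scalar, bivector

a = [1.0, 0.0, 0.0]
b = [1.0, 1.0, 0.0]
s, B = geometric_product(a, b)
print("scalar part:", s)     # the dot product of a and b
print("bivector part:", B)   # the oriented area a^b, here in the e1^e2 plane
```

Unlike the cross product, this construction works in any dimension and the product is associative, which is exactly the elegance argument for Clifford algebra over the dot/curl split.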