Scientech

iSpin into Nova-Paradigm

[An interesting piece by W. Patrick McCray]

In 1988, Albert Fert and Peter Grünberg independently discovered that the electrical resistance of structures made of alternating layers of magnetic and non-magnetic metals can change by an unexpectedly large amount in the presence of an applied magnetic field. Within a decade, this seemingly esoteric observation had revolutionized the electronics industry by allowing hard drives to store ever-increasing amounts of data. And when Fert and Grünberg shared the Nobel Prize in Physics in 2007 for the discovery of giant magnetoresistance (GMR), the Royal Swedish Academy of Sciences announced that “GMR technology may also be regarded as one of the first major applications of the nanotechnology that is now so popular in a very diverse range of fields.” However, the discovery of GMR is interesting for reasons beyond its ‘nano-ness’. The story of GMR raises a number of questions about the nature of contemporary knowledge production. Does the venerable linear model — with basic research leading to applications — apply to nanoscience and technology? Or, as some have argued, is nanotechnology an example of ‘post-academic’ science, funded by governments and corporations to solve specific problems rather than to advance knowledge for its own sake?


Discovery and Commercialization

Magnetoresistance, a change in the electrical resistance of a conductor caused by an applied magnetic field, was first observed by the physicist William Thomson (Lord Kelvin) in 1857, whereas the physics underlying electron spin — which is the ultimate source of magnetism in most materials — dates back to the work of Paul Dirac, Wolfgang Pauli and others in the golden era of quantum mechanics. The effect was quite small, typically a few percent, but it was still large enough to be exploited in read heads for magnetic disks and sensors for detecting magnetic fields. However, that all changed with the discovery of GMR in 1988. Grünberg and his team at the Jülich Research Center in Germany made their discovery — a change of about 10% in electrical resistance in the presence of a magnetic field — in a structure containing a 1-nanometre-thick layer of chromium (which is not magnetic) sandwiched between two thicker layers of iron (which is magnetic). Meanwhile Fert and co-workers at the University of Paris-Sud and Thomson-CSF observed an even larger effect (a change of about 50%) in more complex structures containing about 60 alternating layers of chromium and iron. Both teams used molecular beam epitaxy (MBE), a central if often-overlooked research tool in the history of nanotechnology, to make their multilayer samples. Although the French team coined the term ‘giant magnetoresistance’, it was Grünberg who recognized that this effect could be used to detect faint magnetic fields, so he filed for a patent as his group wrote up its results. However, the two group leaders agreed to share the credit for the discovery. GMR also represented the first example of a new kind of technology called ‘spintronics’, so-called because it exploits the spin of the electron, as well as its electric charge, to store and process information. Engineers first used GMR in niche applications, such as sensors to detect very weak magnetic fields, but other companies were eager to apply it in bigger and more lucrative markets.


In particular, Stuart Parkin and colleagues at IBM’s Almaden laboratory near San Jose exploited GMR to make read heads that allowed magnetic disc drives to become smaller while holding eight times more data than before, an achievement that was reported on the front page of The Wall Street Journal in November 1997. A central part of Parkin’s work was to show that GMR devices could be produced by ‘sputtering’, which was much faster than MBE, and some observers wondered why he did not share the Nobel Prize with Fert and Grünberg. These developments helped set the stage for the subsequent explosion in computer memory that, in turn, helped make it possible to store gigabytes of music, photos, videos and so forth on iPods and other portable gadgets. These innovations prompted a member of the Nobel physics committee to comment that “you would not have an iPod without this [GMR] effect.” IBM’s GMR-based innovation brought the broader field of spintronics to a market worth billions of dollars per year (although the first iPods — introduced by Apple in 2001 — actually used GMR-based hard drives made by Toshiba).

Seizing an opportunity

As actual products exploiting the GMR effect appeared on the market and more scientists began to do research in spintronics, science managers from military laboratories and funding agencies began to take notice. The Defense Advanced Research Projects Agency (DARPA) was one of GMR’s first champions. Founded in the wake of Sputnik, DARPA had a reputation among scientists as lean, agile and able to direct considerable resources to high-risk, high-payoff technologies. During the 1990s, DARPA invested millions of dollars into university-based spintronics research, funding fundamental and applied projects. Stuart A. Wolf, a physicist at the Naval Research Laboratory in Washington, DC, was the main champion for spintronics and other GMR-based research programmes at DARPA. One technology he was especially interested in was magnetic random-access memory (MRAM). Like GMR-enabled disk drives, MRAM is based on metallic materials, not semiconductors. And because they store data using magnetic storage elements instead of electrical charge, MRAM devices have the potential advantage of retaining information even after a computer is switched off, unlike regular random-access memory. MRAM devices would also be less vulnerable to radiation damage, which appealed to DARPA for space-based applications. Wolf sold his programme by lugging a memory component pulled from a satellite system into the DARPA offices in 1995. “I plopped it on the director’s desk,” Wolf told me in an interview in 2006. “It weighed forty pounds and cost a quarter of a million dollars. I said, I’m going to replace this with a fifty-cent chip.” Over the next decade, DARPA provided tens of millions of dollars to support a modestly sized international research community made up of university-based researchers, scientists from government laboratories and representatives from the electronics industry, most of whom attended an annual DARPA meeting on spintronics. This community included physicists (both theoretical and experimental), materials scientists, chemists and engineers. The interest and support that DARPA and companies like IBM gave to researchers interested in spintronics coincided with a larger movement underway in the United States and other countries to generate political support for a broader research-and-development effort in nanotechnology.


Advocates of national policies to support nanotechnology used the economic importance of nanoelectronics, and the commercial success of spintronics in particular, to support this broader agenda. In the US, for instance, nanoelectronics figured prominently in the discussions between scientists and funding agencies that led to the establishment of the National Nanotechnology Initiative (NNI). In 1997 and 1998, the National Science Foundation (NSF) organized studies to evaluate possible opportunities in nanotechnology, and a standard case for supporting nanotechnology started to emerge: some time in the next ten to fifteen years the semiconductor industry would encounter serious technical problems in its effort to improve the performance of devices by reducing their size. The path to a replacement technology was unknown, and without investment in new technologies for the computer and semiconductor industries, economic competitiveness could suffer. Scientists, engineers and policy makers in the United States all interpreted the commercialization of GMR as a sign that investment in nanotechnology was both sensible and prescient. Consequently, nanoelectronics — along with novel materials and new technologies for health, energy and biological applications — became a “priority research area” in the initial formulation of the NNI. Research initiatives in Europe and Asia took similar paths. The resulting flood of money for research transformed nanotechnology into one of the most robustly funded, aggressively pursued and widely promoted research areas in modern science and engineering.

Reflections

Debates about the nature of nanoscience, with its emphasis on applications rather than discovery, have suggested that it is one of the first fully realized examples of post-academic science. However, others cite GMR-based hard drives as an example of the commercial benefits that follow from support for basic research, and of the importance of funding the best science and scientists rather than research that is expected to deliver economic returns. In the most basic form of this so-called ‘linear model’ of research (presented most famously in a report called Science, the Endless Frontier, prepared for the US president by Vannevar Bush in 1945), there is a direct path from scientific discovery to application.

Historians, of course, recognize that ‘pure science’ is very much a social construction, and one that often, after closer scrutiny, may not be quite so unfettered as it seems at first. However, as recent sociological and historical studies of the nano-enterprise continue to show, new ways of producing knowledge are emerging that may represent breaks from traditional modes of discovery-driven academic research. It is often argued, for example, that the rise of nanoscience and technology will lead to more multidisciplinary research, even to the point of involving researchers from the social sciences and the humanities. The history of spintronics reflects this realignment. During the Cold War, for example, the military nurtured research in materials science and solid-state physics, areas that would later be central to the development of spintronics. After the collapse of the Soviet Union, military agencies continued to foster new scientific fields, albeit sometimes through new alliances with industry or hybrid government–university–industry programmes.

Government grants officers at DARPA and other agencies acted as ‘institutional entrepreneurs’ and built programmes that melded military funding with corporate investment and goals. To a first approximation, the case of spintronics appears to lend credence to the traditional linear model that posits science as a prime mover for technological applications. The real story, of course, was much more complex, revealing the interplay between basic science, instrumentation, federal policy, industrial research and perceived commercial goals. One cannot help but conclude that the ‘basic’ linear model, even if applicable, is anything but simple when examined closely. Fert and Grünberg originally discovered GMR in the tradition of small-scale basic physics research. Businesses, large and small, swiftly patented and integrated it into products worth billions of dollars in annual sales, and a new scientific community emerged around it. And by studying the history of GMR, we can discern connections between contemporary scientific research and engineering applications, and also gain some insight into the shifting boundaries and relations between science and technology.

To Spae, as Spinoza?

 

The idea of God and a higher form or energy has probably intrigued every single person at some point in their life. It may have arisen from existential questions and the urge to rationalize seemingly absurd trappings of life, or it may have arisen, as in the case of a number of theoretical physicists, from trying to understand the order and symmetry of our Cosmos. Einstein was one of the key figures of the twentieth-century scientific community who discussed his ideas on God at length. The interesting part is that his ideas were essentially grounded in Spinozism.

So what is Spinozism?

Spinozism is a philosophical system of ideas put forth by Baruch Spinoza, a 17th century Dutch philosopher, who defined “God” as a singular self-reliant entity, with matter and thought being attributes of this form.


Spinoza

According to Spinoza, our universe is a mode associated with the two attributes of Thought and Extension. While the former term is self-explanatory, one may mention a little about what the latter refers to. Metaphysics says that extension can be thought of in terms of the property of “taking up space”. Descartes defines extension as the property of existing in more than one dimension. For Descartes, the primary characteristic of matter is extension, just as the primary characteristic of mind is consciousness. Going back to Spinoza’s views, he believed that God has infinitely many other attributes which are not present in our physical world.

Spinoza’s words “Deus sive Natura” (God or Nature) highlight this aspect beautifully. For him, God is a dynamic nature in action, evolving and changing. There are two key points to be mentioned here: firstly, even God under the attributes of thought and extension cannot be identified strictly with our world, since these attributes form just a subset of God’s (note the conspicuous dissociation of God from a ‘Him’ or ‘Her’) infinitely many attributes. Secondly, Spinoza insists that one cannot conceive any attribute of a substance that, in itself, leads to the division of that substance, and that “a substance which is absolutely infinite is indivisible” (Ethics, Part I). So, Spinoza showed, by his arguments, that the world is a subset of God. The world is essentially in, and made of, the all-pervading entity known as God, and just as the Pantheists posit, one does not have a distinct anthropomorphic God (anthropomorphism being the attribution of human form or characteristics to something that is not human). Much like the idea of Krishna consciousness, put forth in Hinduism, or the principles of Taoism, I would say. This idea was a bone of contention for quite some time, with the Spinozists being labelled as heretics, evidently because it went against certain ideas and concepts of divinity as laid down in certain religions. For Spinoza, all that exists shares a common unity, all that happens has a certain regularity, and one has the distinct ideas of the spirit and nature, which can be described in terms of the attributes of God.

So where does Einstein come into all this?

Well, for starters, Einstein did say the famous words:

“I believe in Spinoza’s God, who reveals Himself in the lawful harmony of the world, not in a God who concerns Himself with the fate and the doings of mankind…”

Einstein’s admiration for Spinoza is clearly visible in his letter to Dr. Dagobert Runes, philosopher and founder of the Philosophical Library, in which he puts forward his ideas on the ethical significance of Spinoza’s philosophy. Here I’ll quote a small section of that letter:

“I do not have the professional knowledge to write a scholarly article about Spinoza. But what I think about this man I can express in a few words. Spinoza was the first to apply with strict consistency the idea of an all-pervasive determinism to human thought, feeling, and action. In my opinion, his point of view has not gained general acceptance by all those striving for clarity and logical rigor only because it requires not only consistency of thought, but also unusual integrity, magnanimity, and — modesty.”

In fact, Einstein even wrote a poem for Spinoza, a portion of which is:

How much do I love that noble man
More than I could tell with words
I fear though he’ll remain alone
With a holy halo of his own.

Einstein, much like the followers of Spinoza, saw God in the order and ‘lawfulness’ of all that exists. Sigmund Freud famously contested the very idea of ‘God’ and believed that ‘God’ was just an illusion, borne out of the need for a father figure and the central pole of religions, which, according to him, were created to help mankind restrain the violent impulses of man during the development of civilization. Einstein, however, steered clear of this line of thought. He felt that such a belief left no room for any transcendental outlook on life.

Einstein mentions, in one of his letters to a Talmudic scholar, that the idea of a ‘personal God’ was primarily an anthropomorphic concept, which could not be taken seriously. He found such a conception of God, centred on the proverbial human sphere, too constraining. For Einstein, “admiration for the beauty of and belief in the logical simplicity of the order and harmony which we can grasp humbly and only imperfectly” was the key to understanding what God could be.

Einstein, like Spinoza, never sought a traditional ‘God’ or felt the need for moral instruction from religious orders, even though Spinoza was from a line of ‘Conversos’ (those who had been converted forcefully during the Portuguese Inquisition) who were extremely proud of their Jewish identity and order (they re-converted en masse after the decree of toleration passed by the Union of Utrecht in 1579). According to Einstein, “[T]here is nothing divine about morality. It is a purely human affair.” Even though neither followed any religious order per se, neither man could imagine a universe completely devoid of a higher power. For Einstein, the puniness that man realizes upon observing the subtle nuances and elegance of the laws of nature, and of evolution, gives us a rationale for appreciating a higher order and symmetry in God.

Einstein also believed that once our scientific understanding has reached the most fundamental level, the laws will explain themselves. One would not need external constraints or extra variables to explain any portion of the theory: conspicuous signs of a singular, self-subsisting Godliness. This belief was instrumental in Einstein’s drive, right up to the end of his life, to find a unified theory, a “Theory of Everything” that could unambiguously reveal God’s hand in the world around us. He viewed the quest for science as a form of devotion, and since this is a belief that I strongly align with, I rest my case with the following words by Spinoza, from Ethics Part I, Proposition XIV, but with the open invitation not to accept what others have put forth to describe God, and instead to keep trudging along the path of self-realization to truly attest or negate what Einstein and Spinoza so strongly believed in.

Except God, no substance can be or be conceived.

Time Two Too?

Many of us may have wondered about the famous ‘Arrow of Time’ problem. Why is it that time always moves forward? Most explanations center on the idea of an increase in entropy. But have you ever wondered what would be the case if our universe had two time axes instead of just one? Can it be possible that, just as higher-dimensional space coordinates are ‘curled up’ in the Universe, we could also have similar multiple time components? More importantly, can you even visualize what the world would be like with even two time axes?

Going by the technical definition, special relativity describes spacetime as a manifold whose metric tensor has a negative eigenvalue. This corresponds to the existence of a ‘time-like’ component. A metric with multiple negative eigenvalues would correspondingly imply several such components. The theoretical framework of physics has evolved in leaps and bounds since the time of Newton, but one conspicuous invariant is the one-dimensionality of time. While time and space have been amalgamated into what we now know as spacetime, time has remained one-dimensional. In physics, theories with more than one dimension of time have occasionally been put forth, such as Itzhak Bars’s work on “two-time physics”, inspired by the SO(10,2) symmetry of the extended supersymmetry structure of M-theory, which is a recent, exciting development of the concept.
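To make the ‘negative eigenvalue’ statement concrete, here is what a flat metric with two time directions could look like (a standard signature convention chosen for illustration, not a formula taken from any of the papers mentioned here):

$$ds^2 = -c^2\,dt_1^2 \;-\; c^2\,dt_2^2 \;+\; dx^2 + dy^2 + dz^2,$$

that is, a metric of signature (−, −, +, +, +), whose two negative eigenvalues correspond to the two time-like directions.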

Steven Weinstein puts the need for an informed choice succinctly and beautifully: “It’s not at all clear we can be confident that our world has a single time dimension unless we know what a world with multiple times looks like.” In common parlance, you don’t know what you don’t have until you know what that is. Weinstein, in his paper titled “Multiple Time Dimensions” [1], goes on to highlight that an additional constraint has to be imposed on the description of natural phenomena in order to obtain stable solutions. He also obtains a negative sign in the equations of motion when one takes one of the two time coordinates and treats it as a space coordinate instead. These results give rise to two interesting possibilities:

1.  The world may have begun a finite time ago, as a singularity may lie in the past, or the end of the world may be nigh, as a singularity may lie in the future.

2. The additional, nonlocal constraint is a feature of the laws of nature, a feature which guarantees that the evolution is non-singular, having no beginning or end.

Whether the additional constraints obtained by adding a time dimension should be taken as meaningful additions to the laws of nature remains debatable.


Time travel’s too mainstream; no?

If one thinks about it, one can argue about the idea of parallel universes, which has been actively explored both in the sciences and in popular culture. All the what-if situations have always been interesting topics of discussion. What if a Jack from Nottingham could go back in time and, in a heroic display, stall the Great Heathen Army in its tracks and change the history books forever? What if Joe went back in time and created an unnecessary amount of nuisance to stop his grandparents from ever getting married? Amusing and interesting! If we do end up accepting a multiple-time hypothesis, we can always say that at those critical points in time, time just ‘branched out’. In short, we were led into a parallel universe which followed on from the new status quo of that point in time. So maybe the Mercians would still be ruling Nottingham, and Joe would just fade away into nothingness! The whole parallel-universe theory takes on a new ‘dimension’, literally and figuratively.

Philosophers G. C. Goddu and Jack W. Meiland proposed formal models of multi-dimensional time. In the Goddu model, one has two times: normal time and hyper time. Hyper time always progresses, while normal time can ‘skip around’. The two track together through the creation of the universe, the Battle of Trafalgar and the landing at Plymouth, unless some Yankee kid McFlies into a time machine. But in case he does, say to England under Oliver Cromwell in the 1650s, then hyper time continues as it is, but normal time goes back to the 17th century. So, on the normal-time line, one would have the 1980s, 1990s, 2000s, 2010s, 1650s, 1660s and so on. The subtle point here is that there are two hyper-times for the same normal-time. Let me explain:

Say we have hyper-time tagged by T1, T2, T3, T4, T5, … and normal times by N1, N2, N3, N4,…


Cromwell!

Let both track together since the beginning of time. Let N1 be 1653, when Cromwell came to power (T1); N2 be some time in the 19th century, say when the Stockton and Darlington Railway opened (T2); N3 be when McFly was born (T3); N4 be when McFly went back in time, say in 2014 (T4); and N5 be when McFly had tea and crumpets with Cromwell in 1653 (T5). Here one must realize that N5 (1653) < N4 (2014), but T5 is still greater than T4. Also, there are two values of T for the same value N = 1653. Hence, there is no paradox of McFly being there as well as not being there in 1653. A paradox would only arise if McFly were there as well as not there at the same hyper-time/normal-time pair.
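To make the bookkeeping concrete, here is a small toy sketch (my own illustration with made-up dates, not something taken from Goddu’s paper) that records each event as a (hyper-time, normal-time) pair and checks that no contradiction arises:

```python
# Toy illustration of Goddu-style two-dimensional time (hypothetical example).
# Each event is a (hyper_time, normal_time) pair: hyper time always advances,
# while normal time is free to jump backwards when someone time-travels.

events = [
    (1, 1653),  # T1: Cromwell comes to power
    (2, 1825),  # T2: Stockton and Darlington Railway opens
    (3, 1985),  # T3: McFly is born
    (4, 2014),  # T4: McFly steps into the time machine
    (5, 1653),  # T5: McFly has tea and crumpets with Cromwell
]

# Hyper time is strictly increasing, even though normal time jumps back.
assert all(t1 < t2 for (t1, _), (t2, _) in zip(events, events[1:]))

# The same normal time (1653) occurs at two different hyper times...
visits_1653 = [t for t, n in events if n == 1653]
print("Hyper times at which normal time is 1653:", visits_1653)

# ...but no two events share the same (hyper_time, normal_time) pair,
# which is why there is no contradiction about McFly's presence in 1653.
assert len(set(events)) == len(events)
```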

The Meiland model is a little more involved, and I would encourage the readers to read about it as well. If there is enough interest among readers, we may add a continued post on this topic. The ‘wormhole’ of multiple time dimensions is very much a topic of interest among some members of the scientific community as well as philosophers, and it is all open to discussion, analysis and debate.

Jump right in!

 

Kamiokande Kamikaze

 

The Super-Kamiokande is a neutrino observatory in Hida (Japan), which was designed to study solar and atmospheric neutrinos, besides other research pursuits such as watching for supernovae in the Milky Way Galaxy. It is located 1 km underground in the Mozumi Mine, Hida. It consists of a cylindrical stainless steel tank that is 41.4 m in height and 39.3 m in diameter, and holds ultra-pure water (UPW). There is an inner detector region (33.8 m in diameter and 36.2 m in height) and an outer detector which consists of the remaining tank volume. Mounted on this superstructure are 11,146 photomultiplier tubes (PMTs), 50 cm in diameter, that face the inner detector, and 1,885 20-cm PMTs that face the outer detector. These tubes detect the faint flashes of light, known as Cherenkov radiation, given off by the charged particles produced when electron- and muon-neutrinos interact with electrons and nuclei in the water.

Interesting as it sounds, in 2001 the Super-Kamiokande observatory showed us how the smallest of lapses in maintaining optimal conditions for the operation of experimental apparatus can lead to catastrophic results!


Credit: Super-Kamiokande Gallery

On November 12, 2001, the observatory suffered a terrible accident in which a chain reaction of failures destroyed 6,600 of the photomultiplier tubes! The tank was being refilled with water after some of its burned-out tubes had been replaced. The subsequent investigation showed that workmen standing on styrofoam pads on top of some of the bottom PMTs must have caused small fractures in the neck of one of the tubes, leading to an implosion of that tube. That implosion caused a chain reaction, or cascading failure, as the shock wave from the concussion of each imploding tube cracked its neighbours throughout the tank. By Pascal’s principle, pressure is transmitted undiminished through an enclosed static fluid, and here that principle worked against the detector, carrying the pressure pulse from each implosion to the other tubes submerged in the water. To give you an idea of how much it cost to put the facility back in working condition, each of the PMTs cost $3,000! Doing quick math, this led to an incurred loss of about GBP 12.5 million!
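For completeness, the quick math behind that figure goes roughly as follows (the dollar-to-pound conversion of about 1.6 is my assumption for the period, not something stated in the original report):

$$6{,}600 \times \$3{,}000 \approx \$19.8\ \text{million} \approx £12.5\ \text{million}.$$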

The detector was partially restored by redistributing the PMT tubes which did not implode. Also, protective acrylic shells were added to prevent any future chain reactions of this kind.

Body Painting as a Medical Tool

Body painting is seen as a creative and fun-filled activity. What some of us often overlook, or have not heard of, is the use of body painting as a medical tool. In one study, students felt that body painting aided their retention of the anatomical knowledge they acquired. Sensory factors, such as visual stimuli, and the tangible nature of the activity help in the retention and recall of medical concepts such as human anatomy.

In a paper published in 2009 in Anatomical Sciences Education, titled “A Qualitative Study of Student Responses to Body Painting”, the results of the study were put forward. 133 medical students participated in 24 focus groups over the period 2007-2009 at Durham University. These groups were conducted to see whether medical students found body painting anatomical structures to be educationally beneficial.

Five principal results arose:

1. Body painting is a fun learning activity (quite obvious, ain’t it?)

2. Body painting helps in the retention of knowledge: the memorability of body painting arises for a number of reasons. Students would often keep a painting on their body as a visual reminder for an entire day after a session, which shows how effective the tool was for the focus groups in the study.

>> Body painting is a tangible process, of course. Students found that the sensation of being painted aids their memory.

>> Color appeared to have had the most significant effect on students, in helping them retain what was taught and then recall the same.

>> Even those who did the painting found the activity useful, in some cases even more so. The act of painting was an interactive one, wherein one had to relate and associate theoretical ideas with actual body structures. In this activity, the one being painted held the instructions for the painting.

3. The act pushed students out of their comfort zones, in terms of body image and vulnerability. So issues like not going to the gym often enough (Ha ha) emerged in the comments and feedback. But it also surfaced issues that relate directly to patients in the students’ future practice: the very same removal of patients from their comfort zones is what doctors have to handle later. Essentially, this tool of body painting helped shape their behaviour and attitudes for the future.

Body painting was found to be a useful alternative tool for clinical skills teaching. And of course the fun involved added to the positive environment associated with the teaching of medicine. Let’s see if this interesting tool is taken up by more medical practitioners and schools of medicine for training purposes.

Shown below are some pictures (Credit: Danny Quirk Artwork) that highlight the use of body-painting for studying human anatomy.


 


 

 

On the ‘Kelvin Scale’ of the Earth’s Age

 

Lord Kelvin famously calculated the age of the Earth from physical principles and stuck to his result for half a century, even against vehement and near-unanimous opposition from the geologists of the time, who believed that the Earth was a lot older than Lord Kelvin suggested. After radioactivity was discovered by Henri Becquerel in 1896, many believed that Lord Kelvin’s result was wrong, and the Earth much older than he had predicted, because he had not included radioactivity in his calculations; radioactivity was a source of heat that Kelvin could not have included, since it was discovered after his famous model was put forth. However, it was later seen that even after including this source of heat, the calculated age of the Earth was still lower than expected.

So where were the Geophysicists and Geologists going wrong?

Interestingly, in 1895, before radioactivity was discovered, John Perry had shown that convection in the Earth’s interior would negate Lord Kelvin’s calculations and result, since Lord Kelvin had taken conduction to be the only mode of heat transfer in the Earth and had not considered convection at all.

Conduction vs. Convection

 

(Source: Banque de Schemas – SVT)

Unfortunately, Perry’s analysis was neglected during those initial days of the debate in the field of Geophysics. In those days, the mathematical brilliance and force of Lord Kelvin’s work awed the contemporary community of Geologists, even though they were completely against Kelvin’s arguments and result. So, the next time you try to derive a solution in geodynamics, remember that it is often more than just mathematical calculations!

Fourier laid the plinth for what would later be built into an edifice of mathematical analysis of the flow of heat in his 1822 treatise Théorie Analytique de la Chaleur, and he made arguments highlighting that the Earth must be cooling. Kelvin first wrote extensively on heat, building upon some of Fourier’s mathematics. He addressed the question of the age of the Earth in 1844, when he showed that if we assume the Earth to be a solid body cooling from an initially high temperature, measurement of the rate of heat loss from its surface would place bounds on its age. Kelvin took the Earth to have been in an original molten state with uniform temperature, from which it cooled and solidified over time.

The key assumption of Kelvin’s model was that energy was conserved. Kelvin also assumed that the Earth was rigid and its physical properties were homogeneous. Most importantly, he assumed that there was no undiscovered source of energy, both for the case of the Earth as well as for the Sun. One would later see that this was not the case.

Under these theoretical considerations, the temperature varies with depth (z) and with time (t). We solve the diffusion equation for a solid, in one spatial dimension and in the absence of heat sources, as given by Fourier,

$$\frac{\partial T}{\partial t} = K\,\frac{\partial^2 T}{\partial z^2}$$

 

where T refers to the temperature and K refers to the thermal diffusivity. The solution to this equation is found to be of the form,

$$T(z,t) = T_0\,\operatorname{erf}\!\left(\frac{z}{2\sqrt{Kt}}\right)$$

 

Here erf is the error function, for which more information can be found here, and $T_0$ is the initial uniform temperature (the surface being held at zero). The gradient of the temperature is then found to be given by the relation

$$\frac{\partial T}{\partial z} = \frac{T_0}{\sqrt{\pi K t}}\,\exp\!\left(-\frac{z^2}{4Kt}\right),
\qquad\text{so that at the surface}\qquad
\left.\frac{\partial T}{\partial z}\right|_{z=0} = \frac{T_0}{\sqrt{\pi K t}}.$$

 

The temperature and gradient solutions constitute the solution for the cooling of the oceanic lithosphere when it is treated as a half-space. These solutions show us that at time t the average distance over which heat can diffuse is roughly $\sqrt{Kt}$, and that material at depths much greater than this will still remain at the original temperature. One can use the surface-gradient solution to find the age of the Earth by inverting it:

$$t = \frac{T_0^{\,2}}{\pi K \left(\left.\partial T/\partial z\right|_{z=0}\right)^{2}}$$

 

When Kelvin first made his arguments, geothermal data were not available to him. When he returned to this problem after a gap of 15 years, geothermal gradients had been measured in several parts of the world. He chose a mean gradient of 1/50 degree Fahrenheit per foot. Taking the original temperature to be 7000 degrees Fahrenheit and using his estimate of the thermal diffusivity of rock, this gradient yielded an age of 96 million years. Lord Kelvin gave a margin of error to account for uncertainties in the thermal gradient and thermal conductivity (a lower bound of 24 million years and an upper bound of 400 million years).
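As a quick sanity check on that number, here is a small sketch that plugs Kelvin’s figures into the inversion formula above (the value of the thermal diffusivity is my assumption, of the order usually quoted for rock; it is not taken from the text):

```python
import math

# Kelvin's inputs, converted to SI units.
T0 = 7000 * (5 / 9)                  # initial temperature difference: 7000 degF expressed in kelvin
grad = (1 / 50) * (5 / 9) / 0.3048   # surface gradient: (1/50) degF per foot, converted to K/m
kappa = 1.2e-6                       # assumed thermal diffusivity of rock, m^2/s (typical order of magnitude)

# Invert the half-space cooling solution: t = T0^2 / (pi * kappa * grad^2)
t_seconds = T0**2 / (math.pi * kappa * grad**2)
t_myr = t_seconds / (3.156e7 * 1e6)  # seconds -> million years

print(f"Kelvin-style age estimate: {t_myr:.0f} million years")  # ~96 Myr
```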

Interestingly, at that time, geologists stuck by the idea of ‘unlimited age’ that allowed them to explain any phenomenon not by the laws of physics but by “reckless drafts on the bank of time” (Chamberlin, 1899). Lord Kelvin despised this approach. He once had a conversation on exactly this point, in 1867, with the geologist Andrew Ramsay, after the two had been listening to Archibald Geikie.

But this attitude and understanding slowly changed, and even before radioactivity was discovered, geologists had come to accept that the Earth had a finite age, and that the pursuit of determining the age of the Earth by quantitative reasoning was an important one.

Among the assumptions in Kelvin’s model, it was initially believed that the third assumption was flawed, i.e. that the contribution of radioactivity was significant and could account for the difference between the age predicted by the model and the expected age of the Earth. However, Lord Kelvin’s colleague John Perry was probably the only individual at that time to believe that it was instead the neglect of convection as a mode of heat flow that flawed Lord Kelvin’s calculations and result. Perry is famously quoted as saying that “…it is hopeless to expect that Lord Kelvin should have made an error in calculation.” So, rather than looking for errors in the mathematics, Perry concentrated on examining the validity of Kelvin’s assumptions. In Lord Kelvin’s model, the present supply of heat to the surface of the Earth is derived from the cooling of a shallow outer layer of thickness roughly $\sqrt{Kt}$. But what if the thermal conductivity inside the Earth were much higher than at the surface? Then the deep interior would also cool, giving us a large reservoir of energy to maintain the surface heat flux. In that case, Kelvin’s calculated age would be a significant underestimate.

Perry had two reasons for putting forth this idea of higher thermal conductivity in the Earth’s interior:

1.  Experimental evidence showed a slight increase in conductivity of rocks with temperature.

2. The Earth’s increase in density with depth implies a greater proportion of materials (like Iron) that conduct heat better than do silicates and other materials found near the surface.

More radically, Perry argued that convection in the partly fluid interior of the Earth would transfer heat much more effectively than conduction would. In his words, “…much internal fluidity would practically mean infinite conductivity for our purpose.” However, he was unable to calculate the role of convection in heat transfer completely, and approximated its effect by a high ‘quasi-conductivity’ in the Earth’s interior. Perry (with Oliver Heaviside) modified Kelvin’s calculation for the case of large, but finite, interior conductivity. The key point was to relax the earlier assumption that the surface has a certain conductivity while the interior is a perfect conductor, and they showed that the Earth’s present heat flux is consistent with an age of gigayears, provided that the conducting outer layer is a few tens of kilometres thick and the effective (or “quasi-”) conductivity of the interior is ~100 times greater than that of the ‘lid’.

Unfortunately, Perry’s argument was not widely accepted at that time, partly because of the eminence of Lord Kelvin in the scientific world and partly because his ideas were not widely understood by the community of geologists who often shied away from mathematics!

This debate remains and is yet to be resolved conclusively: an opportunity for our readers, students of science surely!

Baryon Asymmetry

In nature, one has the curious, intriguing idea of Baryon Asymmetry.

In simpler terms, there is more matter than antimatter in our universe. For those of you who may still find this just scientific jargon, antimatter is made of antiparticles, which have the same mass as their ordinary-matter counterparts but opposite charge and other particle properties. When particles and antiparticles collide, they annihilate, giving rise to photons, neutrinos and lower-mass particle–antiparticle pairs.

It is good that we have more matter than antimatter or else our world and universe, as we see it today, wouldn’t have existed. Everything would have just collapsed into a confusion of energy and photons and neutrinos, after the initial matter and antimatter, formed after the Big Bang, had annihilated! But why do we have this asymmetry?

Neither the Standard Model nor Einstein’s theory of General Relativity has been able to provide us with an obvious explanation for why this should be so. Since we did not have the complete annihilation of everything, and we can have our favourite coffee with a Tom and a Tracy without worrying about what would happen if an anti-Tom or an anti-Tracy walked in, we postulate that this asymmetry likely arose because physical laws worked differently for matter and antimatter in the first few seconds after the Big Bang.

One of the explanations for this is in terms of Charge-Parity Violation (CP Violation). To cut out the long, boring explanation, let us just look at an interesting presentation that is available on the Nobel Prize website (Click HERE). The Nobel Prize in Physics 2008 was divided, one half awarded to Yoichiro Nambu “for the discovery of the mechanism of spontaneous broken symmetry in subatomic physics”, the other half jointly to Makoto Kobayashi and Toshihide Maskawa “for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature”.

It is still an Open Question (OpQuest) for the scientific community, since no theoretical consensus has been reached for the same. For all the motivated (and not-so motivated) students of science, this one should surely catch your attention!

Miscellany

Tamm and Dirac

I recently found a compilation brought out by CERN of the letters that Paul Dirac and Igor Tamm shared roughly between 1930 and 1933. Enjoy!

http://cds.cern.ch/record/258359/files/P00020744.pdf

Border Territory Research Paper #1:

Analysis of Stress-Coupled Magneto-Electric Effect in BaTiO3-CoFe2O4 Composites using Raman Spectroscopy


Radiation From Accelerated Charge

(Relativity Notes)

G. F. Smoot, Department of Physics, University of California

Click HERE.

* * *

Radiation from a Charge in a Gravitational Field

by Amos Harpaz & Noam Soker

[Arxiv Paper Link]

Click HERE.

* * *

Something about Kinetic Theory, Specific Heat and Perturbations…

Idealized plot of the molar specific heat of a diatomic gas against temperature: it agrees with the value (7/2)R predicted by equipartition at high temperatures, but decreases to (5/2)R and then (3/2)R at lower temperatures, as the vibrational and rotational modes of motion are “frozen out”. The key point is that the kinetic energy is quadratic in the velocity. The equipartition theorem shows that in thermal equilibrium, any degree of freedom (such as a component of the position or velocity of a particle) which appears only quadratically in the energy has an average energy of ½kBT and therefore contributes ½kB to the system’s heat capacity. Courtesy: Jimmy Wales’ Omniscient Beast
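As a quick check on those plateau values, the standard equipartition counting (my own summary of the statement above, not part of the original caption) goes:

$$C_V = \frac{f}{2}R,\qquad f_{\text{trans}}=3,\quad f_{\text{rot}}=2,\quad f_{\text{vib}}=2\ (\text{kinetic}+\text{potential}),$$

so the molar specific heat rises from $\tfrac{3}{2}R$ to $\tfrac{5}{2}R$ to $\tfrac{7}{2}R$ as the rotational and then the vibrational modes unfreeze.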

* * *

International Journal of Scientific and Engineering Research (IJSER)

Finally, the awaited result of the peer review for a paper I had written on my summer work at USIC Lab, DU under Dr. K. Sreenivas, submitted for publication in the International Journal of Scientific and Engineering Research, was shared with me by the Editor (IJSER)…and…IT HAS BEEN ACCEPTED! They say it is ‘in press’ for the November IJSER issue (Volume 3, Issue 11). So, am jubilant. But do not expect me to justify this selection…I will either end up glorifying myself with a biased account or, most probably, put up a forced show of humility…! So…the title of the paper is:

Analysis of Stress-Coupled Magneto-Electric Effect in BaTiO3-CoFe2O4 Composites using Raman Spectroscopy

For now, that’s about it! Do read the paper (if not for the words, do go through the paper for some colorful graphs Snigdh calls ‘Picasso’s pieces’; though there’s not a trace of Cubism in them!). Will put up the link soon after it is published…the print copy is pretty expensive (70 Dollars per copy)…so cannot assure you one of those…!

* * *

Entangled in Qubits

Selection for Project under Dr. P. Panigrahi through the NIUS Programme: I was selected for the project “Geometric Measure of Entangled States and their Applications” under Dr. Prasanta Panigrahi of IISER-K. The selection was on the basis of the NIUS test in Quantum Physics organized in Physics Camp (9.1) 2012. The topic and this project were my primary preference and thankfully, rather unexpectedly given other deserving candidates, I received the nod. Rahul Biswas of ISI, Kolkata will work with me on the project, which will run till Dec 2013, subject to assessment and further continuation under the NIUS scheme. Hope to cherish the experience and thank God for giving me this opportunity!

* * *

Summer Experience 2012

Had three phases in my summer vacations this year and a fairly hectic time therefore, albeit at my own call. The summer experiences were enlightening and enriching in various ways, and will surely help me in my days ahead as a student of science in general, and physics in particular.
Broadly, the work done can be described as:
  • Worked on the project “Stress-Coupled Magnetoelectric Effect in BaTiO3-CoFe2O4 Composites” under Dr. Kondepudy Sreenivasan at the University Science Instrumentation Center, University of Delhi from 18 May 2012 to 14 June 2012. The project largely involved the analysis, by Raman spectroscopy, of the stress coupling and the resultant strain developed, besides the study of the magnetic properties (mainly using a Vibrating Sample Magnetometer) of composites with varied proportions of components.
  • NIUS Physics Summer Camp 9.1 – 18 June 2012 to 30 June 2012: the annual summer school for undergraduate students in the first year of their courses. The camp comprised lectures on Quantum Mechanics (topics such as quantum computation, dynamical systems, etc.), Particle Physics (topics such as spontaneous symmetry breaking, relic abundance of dark matter, etc.) and Astronomy (topics like asteroseismology, etc.) by renowned resource-persons such as Padma Shri Dr. Arvind Kumar, Dr. D. P. Roy and Dr. Prashanta Panigrahi, along with experimental physics lab sessions.
  • I worked on Magneto-Caloric Effect and Magnetic Field Induced Strain in Heusler Alloys under Dr. Ananthakrishnan Srinivasan at the Department of Physics, Indian Institute of Technology, Guwahati from 5 July 2012 to 20 July 2012. The project was to study the characteristic properties of Heusler Alloys using instruments such as a Differential Scanning Calorimeter and High Temperature VSM.

* * *

Boffin Xtras:

Connect QUESTION: Connect these pictures…

Laws of Vibrating Strings: Was tough to find sources for these ‘laws’. Have uploaded the laws for convenience.

Question M.9

Question M.4

Presented my Sem 1 project on

Analysis of Orbital Resonance, Kozai Mechanism and Multiple Star Systems using V-Python

Am really glad that Prof. Vikram Vyas gave us the opportunity to carry out such an interesting application of fundamental and applied physics research, albeit at an elementary level. Feel privileged that he was there for our batch (he left College after the last Sem for some research work). We had it in the NPLT between 10:30 a.m. and 11:25 a.m. in the Mechanics class and between 11:25 a.m. and 12:20 p.m. in Prof. Vikram Vyas’ office hour. Sir appreciated our efforts and the class response was encouraging. Will write next time about my project… Till then MjGM

Fields of Interest :

(Either as interests or areas in which I have presented projects/presentations)

  1. Memristive Systems and Memristance
  2. Kozai Mechanism and Multiple Star Systems
  3. Genetically Modified Food: Prospects and Problems in India
  4. Use of Organisms to detect Nutrient Deficiency Syndrome(s) in Soil
  5. Use of Mulch and Kitchen-Wastes for Production of Bio-Fertilizers
  6. Analysis of Non-Linear Equations (especially Lane – Emden Equation)
  7. Chromotherapy and its Application in Physiology
  8. Charge-Coupled Devices and Applications (Astronomy, MAPS, TEM, etc.)

 
