This book covers fiber optics and laser techniques, digital electronics, nuclear fusion, wave optics and the wave theory of light, and lasers, including their types and applications. It also includes a detailed review of logic gates and Boolean algebra as part of the digital electronics coverage. It is not only a textbook for reference and reading but also contains many worked examples with solutions and exercise sets for self-study and practice. The text can be used to prepare for any test on the fundamentals and modern applications of physics, as well as by engineering students for their semester examinations. It is designed to resolve students’ doubts and to foster special interest at the academic project level as well as in general studies.
Fundamental physics is combined with modern applications, engineering skills, and problem solving. Engineering physics is published mainly for electronics and communication engineering students and for applied science or applied electronics students, for project and research purposes in addition to their coursework. The text is also a good reference for teachers as well as for entrepreneurial engineers.
The main features of the text are:
Designed for students and practicing engineers as a text or a reference.
Includes detailed information on modern theory and industrial applications.
Covers topics such as fiber optics, laser techniques, holograms, digital electronics, nuclear fusion, wave optics, optical theories, and more.
Provides a detailed review of logic gates and Boolean algebra.
Offers practice questions and further exercises with solutions.
Wave and Particle Duality of Radiation
Wave-Particle Duality of Matter
De Broglie’s Hypothesis
De Broglie Wavelength
De Broglie’s Wavelength Associated with Electrons
Properties of Matter Waves
Experimental Verification of De Broglie’s Hypothesis
The Development of Quantum Theory
Wave Packet
Schrodinger Wave Mechanics
The Motion Equation of Matter Waves
Wave Velocity and Group Velocity
The Uncertainty Principle
Experimental Illustrations of the Uncertainty Principle
Interpretation of the Uncertainty Principle
X-Ray Spectrum
X-Ray Absorption and Absorption Coefficient
Moseley’s Law
Interaction of X-Rays with Matter
X-Ray Diffraction
Bragg’s Law
Compton’s Effect
Electron Refraction: Bethe’s Law
Electrostatic Lenses
Electron Gun
The Cathode Ray Tube (CRT)
Limitation of Electrostatic Deflection
Electromagnetic Deflection Type CRT
Cathode Ray Oscilloscope (CRO)
Bainbridge Mass Spectrograph
Electron Microscopes
Cardinal Points of an Optical System
Huygens’ Eye-Piece
Ramsden’s Eye-Piece
Aberrations or Lens Defects
Achromatism of Lenses
Wave Theory of Light
Huygens’ Principle of Wave Propagation
Interference of Light
Young’s Experiment
Explanation of Wave Theory
Analytical Treatment of Interference
Coherent Sources
Condition for Sustained Interference of Light
Types of Interference
Division of Wavefront
Fresnel’s Biprism
Change of Phase on Reflection
Division of Amplitude: Interference in Thin Films
Necessity of an Extended Source
Newton’s Rings
Newton’s Rings by Transmitted Light
Determination of the Wavelength of Sodium Light Using Newton’s Rings
Determination of the Refractive Index of a Liquid
Haidinger’s Fringes: Fringes of Equal Inclination
The Michelson Interferometer
Diffraction of Light
Fresnel’s Diffraction
Diffraction at a Straight Edge
Explanation of Diffraction Fringes in the Illuminated Region
Intensity at the Edge of the Geometrical Shadow
Fresnel and Fraunhofer Diffraction
Fraunhofer Diffraction at a Single Slit
Plane Diffraction Grating (Diffraction at N Parallel Slits)
Width of the Principal Maxima
The Formation of Multiple Spectra with a Grating
The Difference between Prism and Grating Spectra
The Difference between Interference and Diffraction
The Resolving Power of an Optical Instrument
The Rayleigh Criterion for the Limit of Resolution
The Difference between the Dispersive Power and the Resolving Power of a Grating
The Resolving Power of a Prism
The Resolving Power of a Telescope
The Resolving Power of a Microscope
Polarization of Light
Polarization of Light Waves
Brewster’s Law
Doubly Refracting Crystals
Double Refraction
Nicol Prism
Huygens’ Theory of Double Refraction
Elliptically and Circularly Polarized Light
Quarter Wave Plate
Half Wave Plate
Production of Circularly and Elliptically Polarized Light
Nuclear Structure and Nuclear Forces
Proton-Neutron Theory of Nuclear Composition
Static Properties of a Nucleus
The Atomic Mass Unit and Mass-Energy Equivalence
The Mass Defect and Packing Fraction
The Mass Difference and Nuclear Binding Energy
Nuclear Forces
Nuclear Models
The Semi-Empirical Mass Formula
The Particle Accelerator
Linear Particle Accelerators
The Lawrence Cyclotron
The Synchrocyclotron or Frequency-Modulated Cyclotron
The Betatron Device
The Proton Synchrotron
Nuclear Reactions
Nuclear Reaction Cross-Section
Nuclear Fission
Chain Reaction
Nuclear Reactors
Nuclear Fusion
The Geiger-Müller Counter
Mass Spectrographs
Cosmic Rays
Number Systems Used in Digital Electronics
The Decimal Number System
The Binary System
The Binary to Decimal Conversion
Binary Fractions
The Double-Dadd Method
The Decimal to Binary Conversion
Shifting the Place Point
Binary Operations
Binary Addition
Binary Subtraction
The Complement of a Number
1’s Complement Subtraction
2’s Complement Subtraction
Binary Multiplication
Binary Division
Shifting a Number to the Left or Right
Representation of Binary Numbers as Electrical Signals
The Octal Number System
Octal to Decimal Conversion
Decimal to Octal Conversion
Binary to Octal Conversion
Octal to Binary Conversion
Advantages of the Octal Number System
The Hexadecimal Number System
How to Count Beyond F in the Hex Number System
Binary to Hexadecimal Conversion
Hexadecimal to Binary Conversion
Decimal to Hexadecimal Conversion
Hexadecimal to Decimal Conversion
Logic Gates and Boolean Algebra
Logic Gates
Boolean Algebra
The Unique Feature of Boolean Algebra
Laws of Boolean Algebra
Equivalent Switching Circuits
De Morgan’s Theorem
Dielectric Constant
Energy Stored in a Capacitor
Induced Charge
Field Vectors
Induced Dipoles
Permanent Dipoles
Polarization: An Atomic View
Types of Polarization
Electronic Polarization
Ionic Polarization
Orientation Polarization
Total Polarization
The Clausius-Mossotti Equation
Dielectric Loss
Loss Angle and Loss Tangent
Complex Relative Permittivity
Interaction of Radiation with Matter: A Quantum Mechanical View
The Metastable State
The Active Medium
Population and Thermal Equilibrium
Conditions for Light Amplification
Population Inversion
Negative Absorption
The Principal Pumping Schemes
Optical Resonator
Laser Beam Characteristics
Types of Lasers
Applications of Lasers
Optical Fibers
Propagation of Light Through a Cladded Fiber
Modes of Propagation
Types of Optical Fibers
Applications of Optical Fibers
Advantages of Optical Fibers
Fiber Losses
Optical Windows
Bandwidth-Distance Product
Fiber Optic Communications
Answers to Odd-Numbered Exercises
About the CD-ROM
The textbook touches on the areas of digital control systems: analysis, stability, and classical design; state variables for both continuous-time and discrete-time systems; observers and pole-placement design; Lyapunov stability; optimal control; and recent advances in control systems, namely adaptive control, fuzzy logic control, and neural network control.
The EZW algorithm is a lossy compression algorithm in which bits are emitted into the bit stream in order of their importance. The EZW encoder calculates a best-suited threshold value for compressing the still image at a specific decomposition level, followed by multilevel decomposition steps using this threshold. Typically the threshold ranges from 6 to 60 for a decomposition level of 8.
Fig.: Example of a three-level wavelet-decomposed image.
ENCODING IN EZW
Natural images can be represented as a square matrix, and they have a low-pass spectrum. During wavelet decomposition, the energy in the sub-bands decreases as the scale goes lower.

For a still image, the lower-frequency components (smooth color variations) are more important than the higher-frequency components (sharp edges). The Discrete Wavelet Transform (DWT) is used here to separate the lower-frequency components from the higher-frequency ones. Wavelets are used instead of traditional sub-band coding and the Discrete Cosine Transform (DCT) because they are more effective at localizing edges. After decomposition, the lowest-frequency node holds the highest coefficient value and is taken as the root node of the tree obtained from the wavelet decomposition. After decomposing the still image, two separate lists of wavelet coefficients are obtained.
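The decomposition step itself is compact in code. Below is a minimal sketch assuming the PyWavelets package and a Haar wavelet; the article does not prescribe a particular wavelet or library, so both choices are illustrative.

    # A three-level 2-D DWT sketch, assuming PyWavelets (pywt) is installed.
    import numpy as np
    import pywt

    image = np.random.rand(256, 256)                # stand-in for a still image
    coeffs = pywt.wavedec2(image, 'haar', level=3)  # three decomposition levels
    ll3 = coeffs[0]                                 # lowest-frequency sub-band,
    print(ll3.shape)                                # root of the tree: (32, 32)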
Fig.: Flow chart of the encoding algorithm.
During a dominant pass, coefficients with coordinates on the dominant list are compared to the threshold Ti to determine their significance. The initial threshold is the mean of the magnitudes of all the pixel values; each subsequent threshold is half of the previous one, and the process continues until the threshold decreases to 1. A pixel whose magnitude is less than the current threshold is considered insignificant. If none of its descendants (wavelet coefficients of the same orientation in the same spatial location at finer scales, which are then likely to be insignificant with respect to the threshold) is significant, it is coded as a zerotree (T). If an insignificant pixel has a significant value (absolute value greater than the threshold) somewhere among its descendants, it is coded as an isolated zero (Z). The remaining significant pixels are coded so that values greater than zero become positive (P) and values less than zero become negative (N), and these are handed to the subordinate pass for quantization. For example, with an initial threshold of 32, coding yields a matrix over the symbols [P, N, Z, T].
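A dominant-pass classification along these lines can be sketched as follows. This is a minimal illustration of the P/N/Z/T rule just described, not a reference implementation; the descendants helper, which would walk the tree of same-orientation coefficients at finer scales, is assumed.

    def classify(coeffs, pos, threshold, descendants):
        """Assign one EZW symbol to the coefficient at pos for this threshold."""
        c = coeffs[pos]
        if abs(c) >= threshold:
            return 'P' if c > 0 else 'N'   # significant: positive or negative
        if any(abs(d) >= threshold for d in descendants(pos)):
            return 'Z'                     # a descendant is significant: isolated zero
        return 'T'                         # whole subtree insignificant: zerotree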
Quantization is the process of approximating an infinite set of values by a finite, predictable, small set of values. The subordinate pass quantizes each significant pixel value to a two-symbol alphabet, which gives the decoder some idea of the range in which the actual pixel value lies. During a subordinate pass, all coefficients on the subordinate list are scanned and the magnitude information available to the decoder is refined by an additional bit of precision. The subordinate pass comprises this quantization process followed by arithmetic coding.
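As a sketch of this refinement (assuming, as in Shapiro's original scheme, thresholds that are powers of two), each known-significant coefficient contributes one bit saying whether its magnitude lies in the upper or lower half of its current uncertainty interval:

    def subordinate_pass(magnitudes, threshold):
        # `threshold` is the current dominant-pass threshold T. Each listed
        # coefficient's magnitude is known to within an interval of width T;
        # reveal one more bit: 1 if the magnitude lies in the upper half of
        # that interval, else 0. These bits are then entropy coded.
        half = threshold // 2
        return [1 if (m % threshold) >= half else 0 for m in magnitudes]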
The decoding process is simply the reversal of the encoding steps, and it consists of a number of stages. The first is an initialization step, in which a zero matrix of the same size as the coded matrix and a threshold value are initialized.

The second step is known as the principal stage. Here the codes in the [P, N, Z, T] matrix are analyzed and values are assigned to the symbols. The [P, N, Z, T] matrix consists of codes, each being one of the symbols P, N, Z, or T:
- If the symbol is P, the zero at the corresponding position in the zero matrix is replaced by Tn (where Tn is the nth threshold value).
- If the symbol is N, a -Tn replaces the zero in the zero matrix. These values are then added to the list of processed coefficients.
- For Z and T, do nothing.
The next stage is the secondary stage, in which the secondary matrix obtained from the principal stage is analyzed and each 1 is modified by adding one half of the previous threshold if the coefficient is positive, or subtracting it if the coefficient is negative; zeros are left untouched.

These steps are repeated until the code is exhausted.
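The two decoding stages can be sketched as below; the symbol stream, refinement bits, and the encoder's scan order are assumed to be available, and the function names are illustrative rather than taken from any reference implementation.

    def principal_stage(rec, symbols, scan_positions, t):
        """Place +t or -t for P/N symbols in the (NumPy) zero matrix rec;
        Z and T leave the matrix untouched."""
        significant = []
        for pos, s in zip(scan_positions, symbols):
            if s == 'P':
                rec[pos] = t
                significant.append(pos)
            elif s == 'N':
                rec[pos] = -t
                significant.append(pos)
        return significant               # positions reconstructed so far

    def secondary_stage(rec, positions, bits, t):
        """Refine each known coefficient by half the previous threshold."""
        for pos, b in zip(positions, bits):
            if b:                        # zeros are left untouched
                rec[pos] += t / 2 if rec[pos] > 0 else -t / 2

Starting from the zero matrix and the initial threshold, the decoder alternates these two stages, halving the threshold each round, until the code is exhausted.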
EZW is a new kind of image coding technique that produces a highly compressed image after coding and high fidelity when decoded. The small size of the coded data makes data transport much easier and saves a great deal of storage space. The most striking fact about EZW is that it is an advanced yet comparatively simple algorithm. It can be simulated using MATLAB and implemented on a DSP chip after being translated into a compiled language.
A touchscreen is an easy-to-use input device that allows users to control PC software and DVD video by touching the display screen. A touch system consists of a touch sensor that receives the touch input, a controller, and a driver. The most commonly used touch technologies are the capacitive and resistive systems. Other technologies used in this field are infrared, near-field imaging (NFI), and surface acoustic wave (SAW) technology; these are the latest in the field but are much more expensive.

The use of touch systems as graphical user interface (GUI) devices for computers continues to grow in popularity. Touch systems are used for many applications such as ATMs, point-of-sale systems, industrial controls, casinos, and public kiosks. A touch system is basically an alternative to a mouse or keyboard.

The market for touch systems is expected to reach around $2.5 billion by 2004. The companies involved in developing touch systems include Philips and Samsung; Philips has even developed touchscreen mobile phones.
A touch system consists of a touch sensor that receives the touch input, a controller, and a driver. The touch screen sensor is a clear panel designed to fit over a PC display. When the screen is touched, the sensor detects the voltage change and passes the signal to the touch screen controller, which reads and translates the sensor input into a conventional bus protocol (serial, USB); a software driver then converts the bus information into cursor actions and provides system utilities.
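As a concrete illustration of the controller/driver step, the sketch below maps raw readings from a hypothetical 12-bit sensor ADC to pixel coordinates; the calibration bounds and screen size are invented for the example and would come from a calibration routine in a real driver.

    # Assumed 12-bit ADC; the calibration bounds are the raw readings seen
    # at the screen edges (hypothetical values for illustration).
    X_MIN, X_MAX = 200, 3900
    Y_MIN, Y_MAX = 180, 3850
    SCREEN_W, SCREEN_H = 1024, 768   # assumed display resolution

    def raw_to_screen(raw_x, raw_y):
        """Scale calibrated raw sensor readings to pixel coordinates."""
        x = (raw_x - X_MIN) * SCREEN_W // (X_MAX - X_MIN)
        y = (raw_y - Y_MIN) * SCREEN_H // (Y_MAX - Y_MIN)
        # Clamp to the visible screen before reporting a cursor position.
        return max(0, min(SCREEN_W - 1, x)), max(0, min(SCREEN_H - 1, y))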
Because the touch sensor resides between the user and the display while receiving frequent physical input from the user, vacuum-deposited transparent conductors serve as the primary sensing element. Vacuum-coated layers can account for a significant fraction of touch system cost. Cost and application parameters are the chief criteria in determining system selection. Primarily, the touch system integrator must determine with what implement the user will touch the sensor and what price the application will support.
Applications requiring activation by a gloved finger or an arbitrary stylus, such as a plastic pen, will specify either a low-cost resistive sensor or a higher-cost infrared (IR) or surface acoustic wave (SAW) system. Applications anticipating bare-finger input, or amenable to a tethered pen, call for the durable and fast capacitive touch systems. A higher price tag generally brings increased durability and better optical performance.
The most commonly used systems are generally the capacitive and resistive systems. The other technologies used in this field are infrared and surface acoustic wave (SAW) technology; these are the latest in the field but are much more expensive.
To avoid shoot-through in voltage source inverters (VSIs), a dead-time, a small interval during which both the upper and lower switches in a phase-leg are off, is introduced into the standard pulse width modulation (PWM) control of VSIs. However, such a blanking time can cause problems such as output waveform distortion and fundamental voltage loss, especially when the output voltage is low.
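As a minimal illustration of the blanking interval itself (a sketch, not the method proposed in this paper), the snippet below builds ideal complementary gate commands and then delays each switch's turn-on edge by the dead-time, so that both devices are briefly off around every transition.

    import numpy as np

    def complementary_pwm_with_deadtime(duty, t_dead, period=1.0, n=10000):
        """Ideal complementary PWM commands with dead-time added at turn-on."""
        t = np.linspace(0.0, period, n, endpoint=False)
        upper = t < duty * period          # ideal upper-switch command
        lower = ~upper                     # ideal complementary command
        shift = int(round(n * t_dead / period))

        def delay_on(g):
            # Delay only the rising edge: the switch turns on `shift` samples
            # after its ideal edge but turns off immediately. (np.roll wraps
            # at the array boundary, a harmless simplification for one
            # carrier period.)
            return g & np.roll(g, shift)

        return delay_on(upper), delay_on(lower)

    up, low = complementary_pwm_with_deadtime(duty=0.5, t_dead=0.02)
    assert not np.any(up & low)            # the switches are never on together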
To overcome dead-time effects, most solutions focus on dead-time compensation, introducing complicated PWM compensators and expensive current-detection hardware. In practice, the dead-time varies with the gate-drive path propagation delay, device characteristics, output current, and temperature, which makes compensation less effective, especially at low output current, low frequency, and zero-current crossing. Several switching strategies for PWM power converters have been proposed to minimize the dead-time effect, and a dead-time minimization algorithm has also been discussed to improve inverter output performance. A previously proposed phase-leg configuration prevents shoot-through; however, the additional diode in series in the phase-leg increases complexity and causes more loss in the inverter. That phase-leg configuration is also unsuitable for high-power inverters because the upper device's gate turn-off voltage is reverse-clamped by a diode turn-on voltage. Such a low voltage, usually less than 2 V, is not enough to ensure that a device stays in its off-state while its complementary device is active.

High-power inverters usually need longer dead-times than their low-power counterparts. Moreover, due to complicated structures and severe parasitic parameter variations, the dead-time for high-power inverters in practice requires specific adjustment and/or compensation, and this process is usually time-consuming. For general applications, automatically eliminating dead-time through gate-drive technology is a desirable and complete solution. Gate drives with intelligent functions are in high demand due to the emerging technologies of power electronics building blocks (PEBB) and intelligent power modules (IPM), because smart functions improve power devices' modularity, flexibility, and reliability.

In this work, an effective dead-time elimination method is proposed. The method is based on decomposing a generic phase-leg into two basic switching cells, each configured as a controllable switch in series with an uncontrollable diode, so that dead-time is not needed. In this paper, the effect of dead-time in VSIs is first introduced; the principle of the proposed method is then explained in detail. Simulation and experimental results are provided to demonstrate the validity and features of the proposed method, and flexible implementation methods are also discussed.
Most of our data is stored on local networks, with servers that may be clustered and share storage. This approach has had time to mature into a stable architecture, and it provides decent redundancy when deployed correctly. A newer technology, cloud computing, has emerged demanding attention and is quickly changing the direction of the technology landscape. Whether it is Google's unique and scalable Google File System or Amazon's robust Amazon S3 cloud storage model, it is clear that cloud computing has arrived, with much to be gleaned from it.

Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the “cloud” that supports them.
The need for large-scale data processing
We live in the data age. It is not easy to measure the total volume of data stored electronically, but an IDC estimate put the size of the “digital universe” at 0.18 zettabytes in 2006 and forecast a tenfold growth by 2011, to 1.8 zettabytes.

Some areas with large data processing needs include:
• The New York Stock Exchange generates about one terabyte of new trade data per day.
• Facebook hosts approximately 10 billion photos, taking up one petabyte of storage.
• Ancestry.com, the genealogy site, stores around 2.5 petabytes of data.
• The Internet Archive stores around 2 petabytes of data, and is growing at a rate of 20 terabytes per month.
• The Large Hadron Collider near Geneva, Switzerland, will produce about 15 petabytes of data per year.
The problem is that while the storage capacities of hard drives have increased massively over the years, access speeds (the rate at which data can be read from drives) have not kept up. One typical drive from 1990 could store 1,370 MB of data and had a transfer speed of 4.4 MB/s, so all the data on a full drive could be read in around five minutes. Almost 20 years later, one-terabyte drives are the norm, but the transfer speed is around 100 MB/s, so it takes more than two and a half hours to read all the data off the disk. That is a long time to read all the data on a single drive, and writing is even slower. The obvious way to reduce the time is to read from multiple disks at once: imagine we had 100 drives, each holding one hundredth of the data; working in parallel, we could read the data in under two minutes. This shows the significance of distributed computing.
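The arithmetic behind these figures is easy to check using the transfer rates quoted above:

    drive_mb = 1_000_000                     # one terabyte expressed in MB
    rate_mb_s = 100                          # MB/s, typical single-drive speed

    print(drive_mb / rate_mb_s / 3600)       # one drive: ~2.8 hours
    print(drive_mb / 100 / rate_mb_s / 60)   # 100 drives in parallel: ~1.7 minutes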
Techalone joins the whole world in observing Earth Hour. Earth Hour will be held today, 27th March 2010, from 8:30 pm to 9:30 pm. Earth Hour is observed to build awareness of preserving nature in people's minds. The first Earth Hour was held in Sydney in 2007, and about 20 lakh people participated in it.

In the last Earth Hour, in 2009, about 35 countries including India took part, and it saved about 1000 megawatts of electricity in India alone. The message of Earth Hour is energy saving and protection of an environment that is heating up every second.
The features of Buzz, as described by Google headquarters, are:
1. Friends you have emailed on Gmail are added automatically.
2. It combines sources like Picasa and Twitter into a single feed, and it also includes full-sized photo browsing.
3. Public and private sharing, so that you can decide who sees what.
4. Inbox integration: Buzz features emails that update dynamically with all Buzz thread content.
5. Recommended Buzz, which puts friend-of-friend content into your stream, even if you're not acquainted.

Buzz can be seen as a combination of Facebook and Twitter: it has the private aspects of Facebook and the public aspects of Twitter, i.e., we can publish our updates as public or private. On Twitter and Facebook we cannot get the updates of friends of friends; in Buzz, the Recommended Buzz feature lets us receive good buzz even when it is not from our friends. Buzz will also try to find boring buzz and automatically collapse the bad buzz.
The Remote Media Immersion (RMI) system is the result of a unique blend of multiple cutting-edge media technologies to create the ultimate digital media delivery platform. The main goal is to provide an immersive user experience of the highest quality. RMI encompasses all end-to-end aspects, from media acquisition and storage through transmission to final rendering. Specifically, the Yima streaming media server delivers multiple high-bandwidth streams, transmission error and flow control protocols ensure data integrity, and high-definition video combined with immersive audio provides the highest-quality rendering. The RMI system is operational and has been successfully demonstrated in small and large venues. Relying on continued advances in electronics integration and residential broadband, RMI demonstrates the future of on-demand home entertainment.
The charter of the Integrated Media Systems Center (IMSC) at the University of Southern California (USC) is to investigate new methods and technologies that combine multiple modalities into highly effective, immersive technologies, applications and environments. One of the results of these research efforts is the Remote Media Immersion (RMI) system. The goal of the RMI is to create and develop a complete aural and visual environment that places a participant or group of participants in a virtual space where they can experience events that occurred in different physical locations. RMI technology can effectively overcome the barriers of time and space to enable, on demand, the realistic recreation of visual and aural cues recorded in widely separated locations.
The focus of the RMI effort is to enable the most realistic recreation of an event possible while streaming the data over the Internet. Therefore, we push the technological boundaries much beyond what current video-on-demand or streaming media systems can deliver. As a consequence, high-end rendering equipment and significant transmission bandwidth are required. The RMI project integrates several technologies that are the result of research efforts at IMSC. The current operational version is based on four major components that are responsible for the acquisition, storage, transmission, and rendering of high quality media.
STAGES OF RMI
Acquisition of high-quality media streams
This authoring component is an important part of the overall chain to ensure the high quality of the rendering result as experienced by users at a later time. As the saying “garbage in, garbage out” implies, no amount of quality control in later stages of the delivery chain can make up for poorly acquired media.
Real-time digital storage and playback of multiple independent streams
The Yima Scalable Streaming Media Architecture provides real-time storage, retrieval, and transmission capabilities. The Yima server is based on a scalable cluster design. Each cluster node is an off-the-shelf personal computer with attached storage devices and, for example, a Fast or Gigabit Ethernet connection. The Yima server software manages the storage and network resources to provide real-time service to the multiple clients that are requesting media streams. Media types include, but are not limited to, MPEG-2 at NTSC and HDTV resolutions, multichannel audio (e.g., 10.2-channel immersive audio), and MPEG-4.
Protocols for synchronized, efficient real-time transmission of multiple media streams

A selective data retransmission scheme improves playback quality while maintaining real-time properties. A flow control component reduces network traffic variability and enables streams of various characteristics to be synchronized at the rendering location. Industry-standard networking protocols such as the Real-time Transport Protocol (RTP) and the Real-Time Streaming Protocol (RTSP) provide compatibility with commercial systems.
Rendering of immersive audio and high-resolution video

Immersive audio is a technique developed at IMSC for capturing the audio environment at a remote site and accurately reproducing the complete audio sensation and ambience at the client location, with full fidelity, dynamic range, and directionality, for a group of listeners (16 channels of uncompressed linear PCM at a data rate of up to 17.6 Mb/s). The RMI video is rendered at HDTV resolution (1080i or 720p) and transmitted at a rate of up to 45 Mb/s.
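As a rough sanity check on the audio figure (the text does not state the sample rate or bit depth, so the values below are assumptions), 16 channels of uncompressed linear PCM land in the quoted ballpark:

    channels, sample_rate, bit_depth = 16, 44_100, 24  # assumed PCM parameters
    print(channels * sample_rate * bit_depth / 1e6)    # ~16.9 Mb/s, near the
                                                       # quoted 17.6 Mb/s ceiling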