Mechanical Production Engineering

Teknik Produksi Mesin (Mechanical Production Engineering)

The mechanical production engineering (manufacturing) specialization studies how to manufacture a product or machine component from the standpoints of technique, process, and management. It builds on courses such as Manufacturing Processes, Machine Tool Planning, Operations and Production Management, and Design for Manufacture.

Manufacturing Engineering

Manufacturing engineering is a field of engineering that deals with the practices of manufacturing: the research and development of processes, machines, and equipment. It also deals with the integration of different facilities and systems for producing quality products (with optimal expenditure) by applying the principles of physics and the study of manufacturing systems, such as the following:
  • Craft or Guild system
  • Putting-out system
  • English system of manufacturing
  • American system of manufacturing
  • Soviet collectivism in manufacturing
  • Mass production
  • Computer Integrated Manufacturing
  • Computer-aided technologies in manufacturing
  • Just In Time manufacturing
  • Lean manufacturing
  • Flexible manufacturing
  • Mass customization
  • Agile manufacturing
  • Rapid manufacturing
  • Prefabrication
A set of six-axis robots used for welding
Manufacturing engineers work on the development and creation of physical artifacts, production processes, and technology. The manufacturing engineering discipline has very strong overlaps with mechanical engineering, industrial engineering, electrical engineering, electronic engineering, computer science, materials management, and operations management. Their success or failure directly impacts the advancement of technology and the spread of innovation. It is a very broad area that includes the design and development of products. This field of engineering first came to prominence in the mid to late 20th century, when industrialized countries introduced factories with:
1. Advanced statistical methods of quality control, pioneered by the American mathematician William Edwards Deming, whom his home country initially ignored. The same methods of quality control later turned Japanese factories into world leaders in cost-effectiveness and production quality.
2. Industrial robots on the factory floor, introduced in the late 1970s. These computer-controlled welding arms and grippers could perform simple tasks such as attaching a car door quickly and flawlessly 24 hours a day. This cut costs and improved speed.

Modern developments

Modern manufacturing engineering studies include all intermediate processes required for the production and integration of a product's components.

Some industries, such as semiconductor and steel manufacturing, use the term fabrication for these processes instead.
KUKA Industrial Robots being used at a bakery for food production
Automation is used in many manufacturing processes, such as machining and welding. Automated manufacturing refers to the application of automation to produce goods in a factory. When implemented well, the main advantages of automated manufacturing are higher consistency and quality, reduced lead times, simplified production, reduced handling, improved work flow, and higher worker morale.
Robotics is the application of mechatronics and automation to create robots, which are often used in manufacturing to perform tasks that are dangerous, unpleasant, or repetitive. These robots may be of any shape and size, but all are preprogrammed and interact physically with the world. To create a robot, an engineer typically employs kinematics (to determine the robot's range of motion) and mechanics (to determine the stresses within the robot). Robots are used extensively in manufacturing engineering.
They allow businesses to save money on labor, perform tasks that are too dangerous or too precise for humans to perform economically, and ensure better quality. Many companies employ assembly lines of robots, and some factories are so robotized that they can run by themselves. Outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. Robots are also sold for various residential applications.

Education

Certification Programs in Manufacturing Engineering

Manufacturing engineers typically hold a bachelor's degree in engineering with a major in manufacturing engineering. Such a degree usually takes four to five years of study, followed by about five years of professional practice to qualify as a professional engineer. Manufacturing engineering technologist is a more applied qualification path.
The degrees for manufacturing engineers are usually designated Bachelor of Engineering (BE or BEng) or Bachelor of Science (BS or BSc); for manufacturing technologists they are Bachelor of Technology (BTech) or Bachelor of Applied Science (BASc) in manufacturing, depending on the university. Master's degrees include Master of Engineering (ME or MEng) in manufacturing, Master of Science (MSc) in manufacturing management, Master of Science (MSc) in industrial and production management, and Master of Science (MSc) or Master of Engineering (ME) in design, a sub-discipline of manufacturing. Doctoral (PhD or DEng) programs in manufacturing are also available, depending on the university.
The undergraduate degree curriculum generally includes units covering physics, mathematics, computer science, project management and specific topics in mechanical and manufacturing engineering. Initially such topics cover most, if not all, of the sub-disciplines of manufacturing engineering. Students then choose to specialize in one or more sub-disciplines towards the end of the degree.

Syllabus

The curriculum for the bachelor's degree in manufacturing engineering is very similar to that of mechanical engineering, and includes:
  • Statics and dynamics
  • Strength of materials and solid mechanics
  • Instrumentation and measurement
  • Thermodynamics, heat transfer, energy conversion, and HVAC
  • Fluid mechanics and fluid dynamics
  • Mechanism design (including kinematics and dynamics)
  • Manufacturing technology or processes
  • Hydraulics and pneumatics
  • Mathematics - in particular, calculus, differential equations, statistics, and linear algebra.
  • Engineering design
  • Mechatronics and control theory
  • Materials engineering
  • Drafting, CAD (including solid modeling), and CAM etc.
Bachelor's degrees in these two areas typically differ by only a few specialized classes, although the mechanical engineering degree is generally more mathematics-intensive.

Manufacturing Engineering Certifications

Certification and Licensure:
Professional Engineer is the term for registered or licensed engineers in some countries who are permitted to offer their professional services directly to the public. The abbreviations PE (USA) and PEng (Canada) are the designations for licensure in North America. In order to qualify for a Professional Engineer license, a candidate needs a bachelor's degree from an ABET-recognized university in the USA, a passing score on a state examination, and four years of work experience, usually gained via a structured internship. More recent graduates have the option of dividing this licensure process in the USA into two segments. The Fundamentals of Engineering (FE) exam is often taken immediately after graduation, and the Principles and Practice of Engineering exam is taken after four years of working in a chosen engineering field.
Society of Manufacturing Engineers (SME) Certifications (USA)
The SME administers qualifications specifically for the manufacturing industry. These are not degree level qualifications and are not recognized at professional engineering level. The following discussion deals with qualifications in the USA only. Qualified candidates for the Certified Manufacturing Technologist Certificate (CMfgT) must pass a three-hour, 130-question multiple-choice exam. The exam covers math, manufacturing processes, manufacturing management, automation, and related subjects. Additionally, a candidate must have at least four years of combined education and manufacturing-related work experience.
Certified Manufacturing Engineer (CMfgE) is a qualification administered by the Society of Manufacturing Engineers, Dearborn, Michigan, USA. Candidates qualifying for the Certified Manufacturing Engineer certificate must pass a three-hour, 150-question multiple-choice exam which covers more in-depth topics than the CMfgT exam. CMfgE candidates must also have eight years of combined education and manufacturing-related work experience, with a minimum of four years of work experience.
Certified Engineering Manager (CEM). The Certified Engineering Manager Certificate is also designed for engineers with eight years of combined education and manufacturing experience. The test is four hours long and has 160 multiple-choice questions. The CEM certification exam covers business processes, teamwork, responsibility and other management-related categories.

Modern tools

CAD model and CNC machined part
Many manufacturing companies, especially those in industrialized nations, have begun to incorporate computer-aided engineering (CAE) programs into their existing design and analysis processes, including 2D and 3D solid modeling computer-aided design (CAD). This method has many benefits, including easier and more exhaustive visualization of products, the ability to create virtual assemblies of parts, and the ease of use in designing mating interfaces and tolerances.
Other CAE programs commonly used by product manufacturers include product lifecycle management (PLM) tools and analysis tools used to perform complex simulations. Analysis tools may be used to predict product response to expected loads, including fatigue life and manufacturability. These tools include finite element analysis (FEA), computational fluid dynamics (CFD), and computer-aided manufacturing (CAM).
Using CAE programs, a mechanical design team can quickly and cheaply iterate the design process to develop a product that better meets cost, performance, and other constraints. No physical prototype need be created until the design nears completion, allowing hundreds or thousands of designs to be evaluated, instead of a relative few. In addition, CAE analysis programs can model complicated physical phenomena which cannot be solved by hand, such as viscoelasticity, complex contact between mating parts, or non-Newtonian flows.
As manufacturing engineering is linked with other disciplines, as seen in mechatronics, multidisciplinary design optimization (MDO) is being used with other CAE programs to automate and improve the iterative design process. MDO tools wrap around existing CAE processes, allowing product evaluation to continue even after the analyst goes home for the day. They also utilize sophisticated optimization algorithms to more intelligently explore possible designs, often finding better, innovative solutions to difficult multidisciplinary design problems.

Sub-disciplines

Mechanics
Mohr's circle, a common tool to study stresses in a mechanical element
Mechanics is, in the most general sense, the study of forces and their effect upon matter. Typically, engineering mechanics is used to analyze and predict the acceleration and deformation (both elastic and plastic) of objects under known forces (also called loads) or stresses. Subdisciplines of mechanics include:
  • Statics, the study of non-moving bodies under known loads
  • Dynamics (or kinetics), the study of how forces affect moving bodies
  • Mechanics of materials, the study of how different materials deform under various types of stress
  • Fluid mechanics, the study of how fluids react to forces[20]
  • Continuum mechanics, a method of applying mechanics that assumes that objects are continuous (rather than discrete)
If the engineering project were the design of a vehicle, statics might be employed to design the frame of the vehicle, in order to evaluate where the stresses will be most intense. Dynamics might be used when designing the car's engine, to evaluate the forces in the pistons and cams as the engine cycles. Mechanics of materials might be used to choose appropriate materials for the manufacture of the frame and engine. Fluid mechanics might be used to design a ventilation system for the vehicle, or to design the intake system for the engine.
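As an illustration of the analysis represented by the Mohr's circle figure above, the principal stresses and the maximum in-plane shear stress for a plane-stress state follow directly from the normal stresses and the shear stress; this is a standard result, shown here only as a worked example:

\[
\sigma_{1,2} = \frac{\sigma_x + \sigma_y}{2} \pm \sqrt{\left(\frac{\sigma_x - \sigma_y}{2}\right)^{2} + \tau_{xy}^{2}},
\qquad
\tau_{\max} = \sqrt{\left(\frac{\sigma_x - \sigma_y}{2}\right)^{2} + \tau_{xy}^{2}}
\]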

Kinematics

Kinematics is the study of the motion of bodies (objects) and systems (groups of objects), while ignoring the forces that cause the motion. The movement of a crane and the oscillations of a piston in an engine are both simple kinematic systems. The crane is a type of open kinematic chain, while the piston is part of a closed four-bar linkage. Engineers typically use kinematics in the design and analysis of mechanisms. Kinematics can be used to find the possible range of motion for a given mechanism, or, working in reverse, can be used to design a mechanism that has a desired range of motion.
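For the piston example, the standard slider-crank relation gives the piston position x as a function of crank angle θ, crank radius r, and connecting-rod length l; this is the kind of range-of-motion result that kinematic analysis provides:

\[
x(\theta) = r\cos\theta + \sqrt{l^{2} - r^{2}\sin^{2}\theta},
\qquad
x_{\max} = r + l, \quad x_{\min} = l - r
\]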

Drafting

A CAD model of a mechanical double seal
Drafting or technical drawing is the means by which manufacturers create instructions for manufacturing parts. A technical drawing can be a computer model or hand-drawn schematic showing all the dimensions necessary to manufacture a part, as well as assembly notes, a list of required materials, and other pertinent information. A U.S. engineer or skilled worker who creates technical drawings may be referred to as a drafter or draftsman. Drafting has historically been a two-dimensional process, but computer-aided design (CAD) programs now allow the designer to create in three dimensions.
Instructions for manufacturing a part must be fed to the necessary machinery, either manually, through programmed instructions, or through the use of a computer-aided manufacturing (CAM) or combined CAD/CAM program. Optionally, an engineer may also manually manufacture a part using the technical drawings, but this is becoming an increasing rarity, with the advent of computer numerically controlled (CNC) manufacturing. Engineers primarily manually manufacture parts in the areas of applied spray coatings, finishes, and other processes that cannot economically or practically be done by a machine.
Drafting is used in nearly every sub-discipline of mechanical and manufacturing engineering, and by many other branches of engineering and architecture. Three-dimensional models created using CAD software are also commonly used in finite element analysis (FEA) and computational fluid dynamics (CFD).

Numerical control

A CNC turning center
Numerical control (NC) refers to the automation of machine tools that are operated by abstractly programmed commands encoded on a storage medium, as opposed to being controlled manually via handwheels or levers, or mechanically automated via cams alone. The first NC machines were built in the 1940s and 1950s, based on existing tools that were modified with motors that moved the controls to follow points fed into the system on punched tape. These early servomechanisms were rapidly augmented with analog and digital computers, creating the modern computer numerical control (CNC) machine tools that have revolutionized machining processes.
Siemens CNC panel
In modern CNC systems, end-to-end component design is highly automated using computer-aided design (CAD) and computer-aided manufacturing (CAM) programs. The programs produce a computer file that is interpreted to extract the commands needed to operate a particular machine via a postprocessor, and then loaded into the CNC machines for production. Since any particular component might require the use of a number of different tools (drills, saws, etc.), modern machines often combine multiple tools into a single "cell". In other cases, a number of different machines are used with an external controller and human or robotic operators that move the component from machine to machine. In either case, the complex series of steps needed to produce any part is highly automated and produces a part that closely matches the original CAD design.
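As a rough sketch of the postprocessing step described above, the Python snippet below turns a toolpath given as (x, y, z) points into simple G-code-style commands. The point list, feed rate, and retract height are invented for the example; real postprocessors are machine-specific and far more involved.

# Rough sketch of a CAM "postprocessor" step. The toolpath, feed rate, and
# retract height are illustrative only; a real postprocessor is tailored to
# one specific machine and controller.

def postprocess(toolpath, feed_mm_per_min=300.0, safe_z=5.0):
    """Turn a list of (x, y, z) points into simple G-code-style text."""
    lines = ["G21 ; millimetre units", "G90 ; absolute coordinates"]
    x0, y0, z0 = toolpath[0]
    lines.append(f"G0 Z{safe_z:.3f} ; retract to a safe height")
    lines.append(f"G0 X{x0:.3f} Y{y0:.3f} ; rapid move above the first point")
    lines.append(f"G1 Z{z0:.3f} F{feed_mm_per_min:.0f} ; feed down to depth")
    for x, y, z in toolpath[1:]:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} Z{z:.3f} F{feed_mm_per_min:.0f}")
    lines.append(f"G0 Z{safe_z:.3f} ; retract when the path is finished")
    return "\n".join(lines)

path = [(0.0, 0.0, -1.0), (10.0, 0.0, -1.0), (10.0, 10.0, -1.0)]
print(postprocess(path))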


History

Earlier forms of automation

Cams

The automation of machine tool control began in the 19th century with cams that "played" a machine tool in the way that cams had long been playing musical boxes or operating elaborate cuckoo clocks. Thomas Blanchard built his gun-stock-copying lathes (1820s-30s), and the work of people such as Christopher Miner Spencer developed the turret lathe into the screw machine (1870s). Cam-based automation had already reached a highly advanced state by World War I (1910s).
However, automation via cams is fundamentally different from numerical control because it cannot be abstractly programmed. Cams can encode information, but getting the information from the abstract level of an engineering drawing into the cam is a manual process that requires sculpting and/or machining and filing.
Various forms of abstractly programmable control had existed during the 19th century: those of the Jacquard loom, player pianos, and mechanical computers pioneered by Charles Babbage and others. These developments had the potential for convergence with the automation of machine tool control starting in that century, but the convergence did not happen until many decades later.

Tracer control

The application of hydraulics to cam-based automation resulted in tracing machines that used a stylus to trace a template, such as the enormous Pratt & Whitney "Keller Machine", which could copy templates several feet across. Another approach was "record and playback", pioneered at General Motors (GM) in the 1950s, which used a storage system to record the movements of a human machinist, and then play them back on demand. Analogous systems are common even today, notably the "teaching lathe" which gives new machinists a hands-on feel for the process. None of these were numerically programmable, however, and required a master machinist at some point in the process, because the "programming" was physical rather than numerical.

Servos and selsyns

One barrier to complete automation was the required tolerances of the machining process, which are routinely on the order of thousandths of an inch. Although connecting some sort of control to a storage device like punched cards was easy, ensuring that the controls were moved to the correct position with the required accuracy was another issue. The movement of the tool resulted in varying forces on the controls that would mean a linear input would not result in linear tool motion. The key development in this area was the introduction of the servomechanism, which produced highly accurate measurement information. Attaching two servos together produced a selsyn, where a remote servo's motions were accurately matched by another. Using a variety of mechanical or electrical systems, the output of the selsyns could be read to ensure proper movement had occurred (in other words, forming a closed-loop control system).
The first serious suggestion that selsyns could be used for machining control was made by Ernst F. W. Alexanderson, a Swedish immigrant to the U.S. working at General Electric (GE). Alexanderson had worked on the problem of torque amplification that allowed the small output of a mechanical computer to drive very large motors, which GE used as part of a larger gun laying system for US Navy ships. Like machining, gun laying requires very high accuracies, much less than a degree, and the forces during the motion of the gun turrets were non-linear. In November 1931 Alexanderson suggested to the Industrial Engineering Department that the same systems could be used to drive the inputs of machine tools, allowing a tool to follow the outline of a template without the strong physical contact needed by existing tools like the Keller Machine. He stated that it was a "matter of straight engineering development". However, the concept was ahead of its time from a business development perspective, and GE did not take the matter seriously until years later, when others had pioneered the field.

Parsons and the invention of NC

The birth of NC is generally credited to John T. Parsons, a machinist and salesman at his father's machining company, Parsons Corp.
In 1942 he was told that helicopters were going to be the "next big thing" by the former head of Ford Trimotor production, Bill Stout. He called Sikorsky Aircraft to inquire about possible work, and soon got a contract to build the wooden stringers in the rotor blades. After setting up production at a disused furniture factory and ramping up production, one of the blades failed and the failure was traced to the spar. At least some of the problem appeared to stem from spot welding a metal collar on the stringer to the metal spar, so Parsons suggested a new method of attaching the stringers to the spar using adhesives, never before tried on an aircraft design.
That development led Parsons to consider the possibility of using stamped metal stringers instead of wood, which would be much stronger and easier to make. The stringers for the rotors were built from a design provided by Sikorsky, which was sent to Parsons as a series of 17 points defining the outline. Parsons then had to "fill in" the dots with a French curve to generate an outline they could use as a template to build the jigs for the wooden stringers. Making a metal cutting tool able to cut that particular shape proved to be difficult. Parsons went to Wright Field to see Frank Stulen, the head of the Propeller Lab Rotary Ring Branch. During their conversation, Stulen concluded that Parsons didn't really know what he was talking about. Parsons realized this, and hired Stulen on the spot. Stulen started work on 1 April 1946 and hired three new engineers to join him.
Stulen's brother worked at Curtiss-Wright Propeller, and mentioned that they were using punched card calculators for engineering calculations. Stulen decided to adopt the idea to run stress calculations on the rotors, the first detailed automated calculations on helicopter rotors. When Parsons saw what Stulen was doing with the punched card machines, he asked Stulen if they could be used to generate an outline with 200 points instead of the 17 they were given, and offset each point by the radius of a mill cutting tool. If you cut at each of those points, it would produce a relatively accurate cutout of the stringer even in hard steel, and it could easily be filed down to a smooth shape. The resulting tool would be useful as a template for stamping metal stringers. Stulen had no problem making such a program, and used it to produce large tables of numbers that would be taken onto the machine floor. Here, one operator read the numbers off the charts to two other operators, one on each of the X- and Y- axes, and they would move the cutting head to that point and make a cut.[4] This was called the "by-the-numbers" method.
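The offsetting idea Parsons asked for can be sketched in a few lines of modern code, assuming the outline is a sequence of 2D points: each point is pushed outward along the local normal by the cutter radius. This only illustrates the concept; the 1940s computation was done with punched card tabulators, and a real CAM offset algorithm must also handle self-intersections and sharp corners.

import math

# Rough sketch of cutter-radius offsetting for an open polyline of (x, y)
# points. Each point is shifted along the local normal (perpendicular to the
# direction between its neighbours) by the cutter radius. Illustrative only.

def offset_outline(points, cutter_radius):
    offset = []
    n = len(points)
    for i, (x, y) in enumerate(points):
        # Estimate the curve direction at this point from its neighbours.
        x_prev, y_prev = points[max(i - 1, 0)]
        x_next, y_next = points[min(i + 1, n - 1)]
        dx, dy = x_next - x_prev, y_next - y_prev
        length = math.hypot(dx, dy) or 1.0
        # Unit normal: the tangent rotated by 90 degrees.
        nx, ny = -dy / length, dx / length
        offset.append((x + cutter_radius * nx, y + cutter_radius * ny))
    return offset

outline = [(0.0, 0.0), (1.0, 0.2), (2.0, 0.3), (3.0, 0.2), (4.0, 0.0)]
print(offset_outline(outline, cutter_radius=0.25))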
At that point Parsons conceived of a fully automated tool. With enough points on the outline, no manual working would be needed, but with manual operation, the time saved by having the part more closely match the outline was offset by the time needed to move the controls. If the machine's inputs were attached directly to the card reader, this delay, and any associated manual errors, would be removed and the number of points could be dramatically increased. Such a machine could repeatedly punch out perfectly accurate templates on command. But at the time Parsons had no funds to develop his ideas.
When one of Parsons's salesmen was on a visit to Wright Field, he was told of the problems the newly-formed US Air Force was having with new jet designs. He asked if Parsons had anything to help them. Parsons showed Lockheed their idea of an automated mill, but they were uninterested. They decided to use 5-axis template copiers to produce the stringers, cutting from a metal template, and had already ordered the expensive cutting machine. But as Parsons noted:
Now just picture the situation for a minute. Lockheed had contracted to design a machine to make these wings. This machine had five axes of cutter movement, and each of these was tracer controlled using a template. Nobody was using my method of making templates, so just imagine what chance they were going to have of making an accurate airfoil shape with inaccurate templates.
Parsons' worries soon came true, and Lockheed's protests that they could fix the problem eventually rang hollow. In 1949 the Air Force arranged funding for Parsons to build his machines on his own. Early work with Snyder Machine & Tool Corp proved that the system of directly driving the controls from motors failed to give the accuracy needed to set the machine for a perfectly smooth cut. Since the mechanical controls did not respond in a linear fashion, the machine could not simply be driven with a given amount of power, because the differing forces meant the same amount of power would not always produce the same amount of motion in the controls. No matter how many points were included, the outline would still be rough.

Enter MIT

This was not an impossible problem to solve, but would require some sort of feedback system, like a selsyn, to directly measure how far the controls had actually turned. Faced with the daunting task of building such a system, in the spring of 1949 Parsons turned to Gordon S. Brown's Servomechanisms Laboratory at MIT, which was a world leader in mechanical computing and feedback systems. During the war the Lab had built a number of complex motor-driven devices like the motorized gun turret systems for the Boeing B-29 Superfortress and the automatic tracking system for the SCR-584 radar. They were naturally suited to technological transfer into a prototype of Parsons's automated "by-the-numbers" machine.
The MIT team was led by William Pease, assisted by James McDonough. They quickly concluded that Parsons's design could be greatly improved; if the machine did not simply cut at points A and B, but instead moved smoothly between the points, then not only would it make a perfectly smooth cut, but it could do so with many fewer points - the mill could cut lines directly instead of having to define a large number of cutting points to "simulate" them. A three-way agreement was arranged between Parsons, MIT, and the Air Force, and the project officially ran from July 1949 to June 1950. The contract called for the construction of two "Card-a-matic Milling Machines", a prototype and a production system, both to be handed to Parsons for attachment to one of their mills in order to develop a deliverable system for cutting stringers.
Instead, in 1950 MIT bought a surplus Cincinnati Milling Machine Company "Hydro-Tel" mill of their own and arranged a new contract directly with the Air Force that froze Parsons out of further development. Parsons would later comment that he "never dreamed that anybody as reputable as MIT would deliberately go ahead and take over my project." In spite of the development being handed to MIT, Parsons filed for a patent on "Motor Controlled Apparatus for Positioning Machine Tool" on 5 May 1952, sparking a filing by MIT for a "Numerical Control Servo-System" on 14 August 1952. Parsons received US Patent 2,820,187 on 14 January 1958, and the company sold an exclusive license to Bendix. IBM, Fujitsu and General Electric all took sub-licenses after having already started development of their own devices.

MIT's machine

MIT fit gears to the various handwheel inputs and drove them with roller chains connected to motors, one for each of the machine's three axes (X, Y, and Z). The associated controller consisted of five refrigerator-sized cabinets that, together, were almost as large as the mill they were connected to. Three of the cabinets contained the motor controllers, one controller for each motor, while the other two housed the digital reading system.
Unlike Parsons's original punched card design, the MIT design used standard 7-track punch tape for input. Three of the tracks were used to control the different axes of the machine, while the other four encoded various control information. The tape was read in a cabinet that also housed six relay-based hardware registers, two for each axis. With every read operation the previously read point was copied into the "starting point" register, and the newly read one into the "ending point". The tape was read continually and the number in the register increased until a "stop" instruction was encountered, four holes in a line.
The final cabinet held a clock that sent pulses through the registers, compared them, and generated output pulses that interpolated between the points. For instance, if the points were far apart the output would have pulses with every clock cycle, whereas closely spaced points would only generate pulses after multiple clock cycles. The pulses were sent into a summing register in the motor controllers, which counted up by the number of pulses every time they were received. The summing registers were connected to a digital-to-analog converter that increased power to the motors as the count in the registers increased.
The registers were decremented by encoders attached to the motors and the mill itself, which would reduce the count by one for every one degree of rotation. Once the second point was reached the pulses from the clock would stop, and the motors would eventually drive the mill to the encoded position. Each 1 degree rotation of the controls produced a 0.0005 inch movement of the cutting head. The programmer could control the speed of the cut by selecting points that were closer together for slow movements, or further apart for rapid ones.
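From the figures quoted above, a quick worked example of the machine's resolution: one pulse advances a control by one degree, which moves the cutting head 0.0005 inch, so

\[
360 \times 0.0005\ \text{in} = 0.18\ \text{in per control revolution},
\qquad
200 \times 0.0005\ \text{in} = 0.100\ \text{in}.
\]

That is, a segment of 200 pulses (an illustrative figure, not from the source) commands a tenth of an inch of travel, and the spacing of successive points on the tape therefore sets the effective feed rate.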
The system was publicly demonstrated in September 1952, appearing in that month's Scientific American. MIT's system was an outstanding success by any technical measure, quickly making any complex cut with extremely high accuracy that could not easily be duplicated by hand. However, the system was terribly complex, including 250 vacuum tubes, 175 relays, and numerous moving parts, reducing its reliability in a production environment. It was also very expensive; the total bill presented to the Air Force was $360,000.14 ($2,641,727.63 in 2005 dollars). Between 1952 and 1956 the system was used to mill a number of one-off designs for various aviation firms, in order to study their potential economic impact.

Proliferation of NC

The Air Force funding for the project ran out in 1953, but development was picked up by the Giddings and Lewis Machine Tool Co. In 1955 many of the MIT team left to form Concord Controls, a commercial NC company with Giddings' backing, producing the Numericord controller. Numericord was similar to the MIT design, but replaced the punch tape with a magnetic tape reader that General Electric was working on. The tape contained a number of signals of different phases, which directly encoded the angle of the various controls. The tape was played at a constant speed in the controller, which set its half of the selsyn to the encoded angles while the remote side was attached to the machine controls. Designs were still encoded on paper tape, but the tapes were transferred to a reader/writer that converted them into magnetic form. The magtapes could then be used on any of the machines on the floor, where the controllers were greatly reduced in complexity. Developed to produce highly accurate dies for an aircraft skinning press, the Numericord "NC5" went into operation at G&L's plant at Fond du Lac, WI in 1955.
Monarch Machine Tool also developed a numerically controlled lathe, starting in 1952. They demonstrated their machine at the 1955 Chicago Machine Tool Show (predecessor of today's IMTS), along with a number of other vendors with punched card or paper tape machines that were either fully developed or in prototype form. These included Kearney & Trecker's Milwaukee-Matic II, which could change its cutting tool under numerical control, a common feature on modern machines.
A Boeing report noted that "numerical control has proved it can reduce costs, reduce lead times, improve quality, reduce tooling and increase productivity." In spite of these developments, and glowing reviews from the few users, uptake of NC was relatively slow. As Parsons later noted:
The NC concept was so strange to manufacturers, and so slow to catch on, that the US Army itself finally had to build 120 NC machines and lease them to various manufacturers to begin popularizing its use.
In 1958 MIT published its report on the economics of NC. They concluded that the tools were competitive with human operators, but simply moved the time from the machining to the creation of the tapes. In Forces of Production, Noble claims that this was the whole point as far as the Air Force was concerned; moving the process off of the highly unionized factory floor and into the un-unionized white collar design office. The cultural context of the early 1950s, a second Red Scare with a widespread fear of a bomber gap and of domestic subversion, sheds light on this interpretation. It was strongly feared that the West would lose the defense production race to the Communists, and that syndicalist power was a path toward losing, either by "getting too soft" (less output, greater unit expense) or even by Communist sympathy and subversion within unions (arising from their common theme of empowering the working class).

CNC arrives

Many of the commands for the experimental parts were programmed "by hand" to produce the punch tapes that were used as input. During the development of Whirlwind, MIT's real-time computer, John Runyon coded a number of subroutines to produce these tapes under computer control. Users could enter a list of points and speeds, and the program would generate the punch tape. In one instance, this process reduced the time required to produce the instruction list and mill the part from 8 hours to 15 minutes. This led to a proposal to the Air Force to produce a generalized "programming" language for numerical control, which was accepted in June 1956.
Starting in September, Ross and Pople outlined a language for machine control that was based on points and lines, developing this over several years into the APT programming language. In 1957 the Aircraft Industries Association (AIA) and Air Material Command at Wright-Patterson Air Force Base joined with MIT to standardize this work and produce a fully computer-controlled NC system. On 25 February 1959 the combined team held a press conference showing the results, including a 3D machined aluminum ash tray that was handed out in the press kit.
Meanwhile, Patrick Hanratty was making similar developments at GE as part of their partnership with G&L on the Numericord. His language, PRONTO, beat APT into commercial use when it was released in 1958. Hanratty then went on to develop MICR magnetic ink characters that were used in cheque processing, before moving to General Motors to work on the groundbreaking DAC-1 CAD system.
APT was soon extended to include "real" curves in 2D-APT-II. With its release, MIT reduced its focus on CNC as it moved into CAD experiments. APT development was picked up with the AIA in San Diego, and in 1962, by Illinois Institute of Technology Research. Work on making APT an international standard started in 1963 under USASI X3.4.7, but many manufacturers of CNC machines had their own one-off additions (like PRONTO), so standardization was not completed until 1968, when there were 25 optional add-ins to the basic system.
Just as APT was being released in the early 1960s, a second generation of lower-cost transistorized computers was hitting the market, able to process much larger volumes of information in production settings. This reduced the cost of implementing an NC system, and by the mid 1960s APT runs accounted for a third of all computer time at large aviation firms.

CAD meets CNC

While the Servomechanisms Lab was in the process of developing their first mill, in 1953, MIT's Mechanical Engineering Department dropped the requirement that undergraduates take courses in drawing. The instructors formerly teaching these programs were merged into the Design Division, where an informal discussion of computerized design started. Meanwhile the Electronic Systems Laboratory, the newly rechristened Servomechanisms Laboratory, had been discussing whether or not design would ever start with paper diagrams in the future.
In January 1959, an informal meeting was held involving individuals from both the Electronic Systems Laboratory and the Mechanical Engineering Department's Design Division. Formal meetings followed in April and May, which resulted in the "Computer-Aided Design Project". In December 1959, the Air Force issued a one year contract to ESL for $223,000 to fund the project, including $20,800 earmarked for 104 hours of computer time at $200 per hour. This proved to be far too little for the ambitious program they had in mind, although their engineering calculation system, AED, was released in March 1965.
In 1959, General Motors started an experimental project to digitize, store and print the many design sketches being generated in the various GM design departments. When the basic concept demonstrated that it could work, they started the DAC-1 project with IBM to develop a production version. One part of the DAC project was the direct conversion of paper diagrams into 3D models, which were then converted into APT commands and cut on milling machines. In November 1963 a trunk lid design moved from 2D paper sketch to 3D clay prototype for the first time. With the exception of the initial sketch, the design-to-production loop had been closed.
Meanwhile, MIT's offsite Lincoln Labs was building computers to test new transistorized designs. The ultimate goal was essentially a transistorized Whirlwind known as TX-2, but in order to test various circuit designs a smaller version known as TX-0 was built first. When construction of TX-2 started, time in TX-0 freed up and this led to a number of experiments involving interactive input and use of the machine's CRT display for graphics. Further development of these concepts led to Ivan Sutherland's groundbreaking Sketchpad program on the TX-2.
Sutherland moved to the University of Utah after his Sketchpad work, but it inspired other MIT graduates to attempt the first true CAD system: the Electronic Drafting Machine (EDM), sold to Control Data and known as "Digigraphics", which Lockheed used to build production parts for the C-5 Galaxy, the first example of an end-to-end CAD/CNC production system.
By 1970 there were a wide variety of CAD firms including Intergraph, Applicon, Computervision, Auto-trol Technology, UGS Corp. and others, as well as large vendors like CDC and IBM.

Proliferation of CNC

The price of computer cycles fell drastically during the 1960s with the widespread introduction of useful minicomputers. Eventually it became less expensive to handle the motor control and feedback with a computer program than it was with dedicated servo systems. Small computers were dedicated to a single mill, placing the entire process in a small box. PDP-8s and Data General Nova computers were common in these roles. The introduction of the microprocessor in the 1970s further reduced the cost of implementation, and today almost all CNC machines use some form of microprocessor to handle all operations.
The introduction of lower-cost CNC machines radically changed the manufacturing industry. Curves are as easy to cut as straight lines, complex 3-D structures are relatively easy to produce, and the number of machining steps that require human action has been dramatically reduced. With the increased automation of manufacturing processes through CNC machining, considerable improvements in consistency and quality have been achieved with no strain on the operator. CNC automation reduced the frequency of errors and provided CNC operators with time to perform additional tasks. CNC automation also allows for more flexibility in the way parts are held in the manufacturing process and in the time required to change the machine to produce different components.
During the early 1970s the Western economies were mired in slow economic growth and rising employment costs, and NC machines started to become more attractive. The major U.S. vendors were slow to respond to the demand for machines suitable for lower-cost NC systems, and into this void stepped the Germans. In 1979, sales of German machines surpassed the U.S. designs for the first time. This cycle quickly repeated itself, and by 1980 Japan had taken a leadership position, U.S. sales dropping all the time. Once sitting in the #1 position in terms of sales on a top-ten chart consisting entirely of U.S. companies in 1971, by 1987 Cincinnati Milacron was in 8th place on a chart heavily dominated by Japanese firms.
Many researchers have commented that the U.S. focus on high-end applications left them in an uncompetitive situation when the economic downturn in the early 1970s led to greatly increased demand for low-cost NC systems. Unlike the U.S. companies, who had focused on the highly profitable aerospace market, German and Japanese manufacturers targeted lower-profit segments from the start and were able to enter the low-cost markets much more easily.
As computing and networking evolved, so did direct numerical control (DNC). Its long-term coexistence with less networked variants of NC and CNC is explained by the fact that individual firms tend to stick with whatever is profitable, and their time and money for trying out alternatives is limited. This explains why machine tool models and tape storage media persist in grandfathered fashion even as the state of the art advances.

DIY, hobby, and personal CNC

Recent developments in small scale CNC have been enabled, in large part, by the Enhanced Machine Controller project from the National Institute of Standards and Technology (NIST), an agency of the US Government's Department of Commerce. EMC is a public domain program operating under the Linux operating system and working on PC based hardware. After the NIST project ended, development continued, leading to EMC2, which is licensed under the GNU General Public License and Lesser GNU General Public License (GPL and LGPL). Derivations of the original EMC software have also led to several proprietary PC based programs, notably TurboCNC and Mach3, as well as embedded systems based on proprietary hardware. The availability of these PC based control programs has led to the development of DIY CNC, allowing hobbyists to build their own machines using open source hardware designs. The same basic architecture has allowed manufacturers, such as Sherline and Taig, to produce turnkey lightweight desktop milling machines for hobbyists.
The easy availability of PC based software and support information of Mach3, written by Art Fenerty, lets anyone with some time and technical expertise make complex parts for home and prototype use. Fenerty is considered a principal founder of Windows-based PC CNC machining.
Eventually, the homebrew architecture was fully commercialized and used to create larger machinery suitable for commercial and industrial applications. This class of equipment has been referred to as Personal CNC. Parallel to the evolution of personal computers, Personal CNC has its roots in EMC and PC based control, but has evolved to the point where it can replace larger conventional equipment in many instances. As with the Personal Computer, Personal CNC is characterized by equipment whose size, capabilities, and original sales price make it useful for individuals, and which is intended to be operated directly by an end user, often without professional training in CNC technology.

Today

Although modern data storage techniques have moved on from punch tape in almost every other role, tapes are still relatively common in CNC systems. Several reasons explain this. One is easy backward compatibility of existing programs. Companies were spared the trouble of re-writing existing tapes into a new format. Another is the principle, mentioned earlier, that individual firms tend to stick with whatever is profitable, and their time and money for trying out alternatives is limited. A small firm that has found a profitable niche may keep older equipment in service for years because "if it ain't broke [profitability-wise], don't fix it." Competition places natural limits on that approach, as some amount of innovation and continuous improvement eventually becomes necessary, lest competitors be the ones who find the way to the "better mousetrap".
One change that was implemented fairly widely was the switch from paper to mylar tapes, which are much more mechanically robust. Floppy disks, USB flash drives and local area networking have replaced the tapes to some degree, especially in larger environments that are highly integrated.
The proliferation of CNC led to the need for new CNC standards that were not encumbered by licensing or particular design concepts, like APT. A number of different "standards" proliferated for a time, often based around vector graphics markup languages supported by plotters. One such standard has since become very common, the "G-code" that was originally used on Gerber Scientific plotters and then adapted for CNC use. The file format became so widely used that it has been embodied in an EIA standard. In turn, while G-code is the predominant language used by CNC machines today, there is a push to supplant it with STEP-NC, a system that was deliberately designed for CNC, rather than grown from an existing plotter standard.
While G-code is the most common method of programming, some machine-tool/control manufacturers also have invented their own proprietary "conversational" methods of programming, trying to make it easier to program simple parts and make set-up and modifications at the machine easier (such as Mazak's Mazatrol and Hurco). These have met with varying success.
A more recent advancement in CNC interpreters is support for logical commands, known as parametric programming (also known as macro programming). Parametric programs include both device commands and a control language similar to BASIC. The programmer can write if/then/else statements and loops, call subprograms, perform arithmetic, and manipulate variables to create a large degree of freedom within one program. An entire product line of different sizes can be programmed using logic and simple math to create and scale an entire range of parts, or to create a stock part that can be scaled to any size a customer demands.
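A minimal sketch of the parametric idea, written in Python rather than a vendor macro language: one routine, driven by size parameters, emits the commands for any member of a part family. The rectangular-pocket geometry, step-down logic, and G-code-style output are invented for the example.

# Minimal sketch of parametric (macro-style) part programming. One routine,
# driven by size parameters, generates the commands for any member of a part
# family; the geometry and output format are illustrative only.

def pocket_program(width, height, depth, step_down=1.0, feed=250.0):
    lines = ["G21", "G90", "G0 Z5.000"]
    z = 0.0
    while z > -depth:                      # one boundary pass per depth step
        z = max(z - step_down, -depth)
        lines.append(f"G1 Z{z:.3f} F{feed:.0f}")
        # Trace the pocket boundary at this depth.
        for x, y in [(0, 0), (width, 0), (width, height), (0, height), (0, 0)]:
            lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed:.0f}")
    lines.append("G0 Z5.000")
    return "\n".join(lines)

# The same logic scales to any size a customer asks for.
for scale in (1.0, 1.5, 2.0):
    print(pocket_program(width=40 * scale, height=20 * scale, depth=5 * scale))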
Since about 2006, the idea has been suggested and pursued to foster the convergence with CNC and DNC of several trends elsewhere in the world of information technology that have not yet much affected CNC and DNC. One of these trends is the combination of greater data collection (more sensors), greater and more automated data exchange (via building new, open industry-standard XML schemas), and data mining to yield a new level of business intelligence and workflow automation in manufacturing. Another of these trends is the emergence of widely published APIs together with the aforementioned open data standards to encourage an ecosystem of user-generated apps and mashups, which can be both open and commercial—in other words, taking the new IT culture of app marketplaces that began in web development and smartphone app development and spreading it to CNC, DNC, and the other factory automation systems that are networked with the CNC/DNC. MTConnect is a leading effort to bring these ideas into successful implementation.
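As a sketch of the kind of machine-data exchange described above, the snippet below parses a hypothetical machine-status XML document with Python's standard library. The element and attribute names are invented for the example and are not the actual MTConnect schema.

# Parses a *hypothetical* machine-status XML document to illustrate
# sensor-data exchange; the schema below is made up for the example.

import xml.etree.ElementTree as ET

SAMPLE = """
<machineStatus machine="mill-07">
  <axis name="X" position="132.500" load="0.41"/>
  <axis name="Y" position="58.125" load="0.37"/>
  <spindle rpm="8200" load="0.62"/>
  <alarm active="false"/>
</machineStatus>
"""

root = ET.fromstring(SAMPLE)
print("machine:", root.get("machine"))
for axis in root.findall("axis"):
    print(f"  axis {axis.get('name')}: pos={axis.get('position')} mm, "
          f"load={float(axis.get('load')):.0%}")
spindle = root.find("spindle")
print(f"  spindle: {spindle.get('rpm')} rpm, load={float(spindle.get('load')):.0%}")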

Description

Modern CNC mills differ little in concept from the original model built at MIT in 1952. Mills typically consist of a table that moves in the X and Y axes, and a tool spindle that moves in the Z (depth). The position of the tool is driven by motors through a series of step-down gears in order to provide highly accurate movements, or in modern designs, direct-drive stepper motors. Closed-loop control is not mandatory today, as open-loop control works as long as the forces are kept small enough.
As the controller hardware evolved, the mills themselves also evolved. One change has been to enclose the entire mechanism in a large box as a safety measure, often with additional safety interlocks to ensure the operator is far enough from the working piece for safe operation. Most new CNC systems built today are completely electronically controlled.
CNC-like systems are now used for any process that can be described as a series of movements and operations. These include laser cutting, welding, friction stir welding, ultrasonic welding, flame and plasma cutting, bending, spinning, pinning, gluing, fabric cutting, sewing, tape and fiber placement, routing, picking and placing (PnP), and sawing.

Tools with CNC variants

  • Drills
  • EDMs
  • Lathes
  • Milling machines
  • Wood routers
  • Sheet metal working machines (turret punch)
  • Wire bending machines
  • Hot-wire foam cutters
  • Plasma cutters
  • Water jet cutters
  • Laser cutters
  • Oxy-fuel cutters
  • Surface grinders
  • Cylindrical grinders
  • 3D printers
  • Induction hardening machines

Tool / machine crashing

In CNC, a "crash" occurs when the machine moves in such a way that is harmful to the machine, tools, or parts being machined, sometimes resulting in bending or breakage of cutting tools, accessory clamps, vises, and fixtures, or causing damage to the machine itself by bending guide rails, breaking drive screws, or causing structural components to crack or deform under strain. A mild crash may not damage the machine or tools, but may damage the part being machined so that it must be scrapped.
Many CNC tools have no inherent sense of the absolute position of the table or tools when turned on. They must be manually "homed" or "zeroed" to establish a reference to work from; these soft limits serve only to locate the part being worked on and are not hard motion limits on the mechanism. It is often possible to drive the machine outside the physical bounds of its drive mechanism, resulting in a collision with itself or damage to the drive mechanism.
Many CNC tools also don't know anything about their working environment. They often lack any form of sensory capability to detect problems with the machining process, and will not abort if something goes wrong. They blindly follow the machining code provided and it is up to an operator to detect if a crash is either occurring or about to occur, and for the operator to manually abort the cutting process.
If the drive system is weaker than the machine's structural integrity, the drive system simply pushes against the obstruction and the drive motors "slip in place". The machine tool may not detect the collision or the slipping, so for example the tool should now be at 210 mm on the X axis but is in fact at 32 mm, where it hit the obstruction and kept slipping. All subsequent tool motions will be off by -178 mm on the X axis, and all future motions are now invalid, which may result in further collisions with clamps, vises, or the machine itself.
Collision detection and avoidance is possible, through the use of absolute position sensors (optical encoder strips or disks) to verify that motion occurred, or torque sensors or power-draw sensors on the drive system to detect abnormal strain when the machine should just be moving and not cutting, but these are not a common component of most CNC tools.
Instead, most CNC tools simply rely on the assumed accuracy of stepper motors that rotate a specific number of degrees in response to magnetic field changes. It is often assumed the stepper is perfectly accurate and never mis-steps, so tool position monitoring simply involves counting the number of pulses sent to the stepper over time. An alternate means of stepper position monitoring is usually not available, so crash or slip detection is not possible.
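A toy illustration of the open-loop position tracking just described, tied to the 210 mm / 32 mm example above and assuming a step size of 0.005 mm per pulse (an invented figure): the controller's notion of position is only a pulse count, so a stall against an obstruction leaves a hidden offset that it cannot detect.

# Toy model of open-loop stepper position tracking. The controller's belief
# is just a pulse count; if the axis stalls against an obstruction, commanded
# and actual positions diverge and nothing in the controller can notice it.

STEP_MM = 0.005  # assumed travel per step pulse (illustrative)

class OpenLoopAxis:
    def __init__(self):
        self.commanded_steps = 0   # what the controller believes
        self.actual_steps = 0      # what the machine really did

    def move(self, steps, stalled_after=None):
        for i in range(steps):
            self.commanded_steps += 1
            if stalled_after is None or i < stalled_after:
                self.actual_steps += 1     # motor actually moved
            # else: motor slips in place; the controller cannot tell

    def report(self):
        print(f"controller thinks: {self.commanded_steps * STEP_MM:.3f} mm, "
              f"actual position: {self.actual_steps * STEP_MM:.3f} mm")

axis = OpenLoopAxis()
axis.move(42000, stalled_after=6400)   # axis hits a clamp partway through
axis.report()   # every later move inherits the same hidden offset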

Numerical accuracy vs Equipment backlash

Within the numerical systems of CNC programming it is possible for the code generator to assume that the controlled mechanism is always perfectly accurate, or that accuracy tolerances are identical for all cutting or movement directions. This is not always a true condition of CNC tools.
CNC tools with a large amount of mechanical backlash can still be highly accurate if the drive or cutting mechanism is only driven so as to apply cutting force from one direction, and all driving systems are pressed tight together in that one cutting direction. However, a CNC device with high backlash and a dull cutting tool can lead to cutter chatter and possible workpiece gouging. Backlash also affects the accuracy of some operations involving axis movement reversals during cutting, such as the milling of a circle, where axis motion is sinusoidal. However, this can be compensated for if the amount of backlash is precisely known, by linear encoders or manual measurement.
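A simplified sketch of the compensation idea, assuming the backlash of the axis has been measured (for example with a dial indicator or a linear encoder): whenever the commanded motion reverses direction, the controller adds the measured lost motion to the raw motor command so the slack is taken up before the table actually moves. The numbers below are invented for the example.

# Very simplified single-axis backlash compensation: on every direction
# reversal the measured backlash is added to the motor command so the table,
# not just the screw, moves the intended distance.

def compensate_moves(targets, backlash):
    """Yield (target, motor_command) pairs for a single axis."""
    position = 0.0
    direction = 0          # -1, 0 or +1: direction of the previous move
    for target in targets:
        delta = target - position
        if delta == 0.0:
            yield target, 0.0
            continue
        new_dir = 1 if delta > 0 else -1
        command = delta
        if direction != 0 and new_dir != direction:
            command += new_dir * backlash   # take up the slack on reversal
        direction = new_dir
        position = target
        yield target, command

for target, cmd in compensate_moves([10.0, 20.0, 15.0, 25.0], backlash=0.05):
    print(f"target {target:6.2f} mm -> motor command {cmd:+7.3f} mm")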
The high backlash mechanism itself is not necessarily relied on to be repeatably accurate for the cutting process, but some other reference object or precision surface may be used to zero the mechanism, by tightly applying pressure against the reference and setting that as the zero reference for all following CNC-encoded motions. This is similar to the manual machine tool method of clamping a micrometer onto a reference beam and adjusting the vernier dial to zero using that object as the reference.

Design for manufacturability for CNC machining

Design for manufacturability (DFM) describes the process of designing or engineering a product so as to facilitate manufacturing and reduce manufacturing costs. DFM allows potential problems to be fixed in the design phase, which is the least expensive place to address them. The design of the component can have an enormous effect on the cost of manufacturing. Other factors may affect manufacturability, such as the type of raw material, the form of the raw material, dimensional tolerances, and secondary processing such as finishing.


Material type

The most easily machined types of metals include aluminum, brass, magnesium, and softer metals. As materials get harder, denser and stronger, such as steel, stainless steel, titanium, and exotic alloys, they become much harder to machine and take much longer, thus being less manufacturable. Most types of plastic are easy to machine, although additions of fiberglass or carbon fiber can reduce the machinability. Plastics that are particularly soft and gummy may have machinability problems of their own.

Material form

Metals are available in many forms. In the case of aluminum, for example, bar stock and plate are the two most common forms from which machined parts are made. The size and shape of the component may determine which form of material must be used. It is common for engineering drawings to specify one form over the other. Bar stock is generally close to half the cost of plate on a per-pound basis. So although the material form isn't directly related to the geometry of the component, cost can be removed at the design stage by specifying the least expensive form of the material.

Tolerances

A significant contributing factor to the cost of a machined component is the geometric tolerance to which the features must be made. The tighter the tolerance required, the more expensive the component will be to machine. When designing, specify the loosest tolerance that will serve the function of the component. Tolerances must be specified on a feature-by-feature basis. There are creative ways to engineer components with looser tolerances that still perform as well as ones with tighter tolerances.

Design and shape

As machining is a subtractive process, the time needed to remove material is a major factor in determining the machining cost. The volume and shape of the material to be removed, as well as how fast the tools can be fed, determine the machining time. When using milling cutters, the strength and stiffness of the tool, which are determined in part by its length-to-diameter ratio, play the largest role in determining that speed. The shorter the tool is relative to its diameter, the faster it can be fed through the material. A ratio of 3:1 (L:D) or under is optimum; if that ratio cannot be achieved, a workaround such as the one depicted here can be used. For holes, the length-to-diameter ratio of the tools is less critical, but should still be kept under 10:1.
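The length-to-diameter rule of thumb above (roughly 3:1 or less for milling cutters, under 10:1 for hole-making tools) is easy to check numerically; the tool dimensions below are made up for the example.

# Quick check of the length-to-diameter rules of thumb stated above:
# roughly 3:1 or less for milling cutters, under 10:1 for hole-making tools.
# The tool list is invented for the example.

def check_tool(name, length_mm, diameter_mm, is_hole_tool=False):
    ratio = length_mm / diameter_mm
    limit = 10.0 if is_hole_tool else 3.0
    status = "ok" if ratio <= limit else "too slender, expect slower feeds"
    print(f"{name:12s} L:D = {ratio:4.1f} (limit {limit:.0f}:1) -> {status}")

check_tool("end mill A", length_mm=24, diameter_mm=10)             # 2.4:1
check_tool("end mill B", length_mm=50, diameter_mm=8)              # 6.3:1
check_tool("drill",      length_mm=90, diameter_mm=10, is_hole_tool=True)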
There are many other types of features which are more or less expensive to machine. Generally chamfers cost less to machine than radii on outer horizontal edges. Undercuts are more expensive to machine. Features that require smaller tools, regardless of L:D ratio, are more expensive.
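A rough sketch of how the two factors above, removal volume and tool feed (the latter derated for slender cutters), drive machining time; the removal rate, derating rule, and dimensions are illustrative assumptions rather than handbook values:

# Very rough machining-time estimate: time = (volume to remove) / (removal rate),
# with the removal rate derated as the cutter's length-to-diameter (L:D) ratio
# grows beyond the 3:1 guideline.  All numbers are illustrative assumptions.

def feed_derating(length_mm, diameter_mm):
    """Return a 0..1 multiplier on the nominal feed based on the L:D ratio."""
    ratio = length_mm / diameter_mm
    if ratio <= 3.0:                 # "a ratio of 3:1 (L:D) or under is optimum"
        return 1.0
    return max(0.2, 3.0 / ratio)     # assumed roughly proportional derating

def machining_time_min(volume_mm3, base_mrr_mm3_per_min, length_mm, diameter_mm):
    """Estimate minutes needed to remove a volume of material with one cutter."""
    mrr = base_mrr_mm3_per_min * feed_derating(length_mm, diameter_mm)
    return volume_mm3 / mrr

# Example: a 50,000 mm^3 pocket at a nominal removal rate of 8,000 mm^3/min.
print(machining_time_min(50_000, 8_000, length_mm=30, diameter_mm=10))  # 3:1 tool
print(machining_time_min(50_000, 8_000, length_mm=60, diameter_mm=10))  # 6:1 tool, slower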

 

Computer-aided technologies

Computer-aided technologies (CAx) is a broad term that means the use of computer technology to aid in the design, analysis, and manufacture of products.
Advanced CAx tools merge many different aspects of the product lifecycle management (PLM), including design, finite element analysis (FEA), manufacturing, production planning, product testing with virtual lab models and visualization, product documentation, product support, etc. CAx encompasses a broad range of tools, both those commercially available and those proprietary to individual engineering firms.
The term CAD/CAM (computer-aided design and computer-aided manufacturing) is also often used in the context of a software tool that covers a number of engineering functions.

List of computer-aided technologies

  • Computer-aided design (CAD)
  • Computer-aided architectural design (CAAD)
  • Computer-aided design and drafting (CADD)
  • Computer-aided process planning (CAPP)
  • Computer-aided quality assurance (CAQ)
  • Computer-aided reporting (CAR)
  • Computer-aided requirements capture (CAR)
  • Computer-aided rule definition (CARD)
  • Computer-aided rule execution (CARE)
  • Computer-aided software engineering (CASE)
  • Component information system (CIS)
  • Computer-integrated manufacturing (CIM)
  • Computer numerical control (CNC)
  • Computational fluid dynamics (CFD)
  • Electronic design automation (EDA)
  • Enterprise resource planning (ERP)
  • Finite element analysis (FEA)
  • Knowledge-based engineering (KBE)
  • Manufacturing process management (MPM)
  • Manufacturing process planning (MPP)
  • Material requirements planning (MRP)
  • Manufacturing resource planning (MRP II)
  • Product data management (PDM)
  • Product lifecycle management (PLM)
  • Computer-aided manufacturing (CAM)

Mechatronics

Mechatronics is an engineering discipline that deals with the convergence of electrical, mechanical, and manufacturing systems. Such combined systems are known as electromechanical systems and have widespread adoption. Examples include automated manufacturing systems; heating, ventilation and air-conditioning systems; and various subsystems of aircraft and automobiles.
Training FMS with learning robot SCORBOT-ER 4u, workbench CNC Mill and CNC Lathe
The term mechatronics is typically used to refer to macroscopic systems but futurists have predicted the emergence of very small electromechanical devices. Already such small devices, known as Microelectromechanical systems (MEMS), are used in automobiles to tell airbags when to deploy, in digital projectors to create sharper images and in inkjet printers to create nozzles for high definition printing. In the future it is hoped the devices will help build tiny implantable medical devices and improve optical communication.

Work

Manufacturing engineering is just one facet of the engineering industry. Manufacturing engineers enjoy improving the production process from start to finish. They have the ability to keep the whole production process in mind as they zero in on a particular portion of the process. Successful students in manufacturing engineering degree programs are inspired by the notion of starting with a natural resource, such as a block of wood, and ending with a usable, valuable product, such as a desk.
Manufacturing engineers work closely with engineering and industrial design. Examples of major companies that employ manufacturing engineers in the United States include General Motors Corporation, Ford Motor Company, Chrysler, Boeing, Gates Corporation and Pfizer. Examples in Europe include Airbus, Daimler, BMW, Fiat, and Michelin Tyre.
Some industries where manufacturing engineers are generally employed:
  • Aerospace industry
  • Automotive industry
  • Chemical industry
  • Computer industry
  • Electronics industry
  • Food processing industry
  • Garment industry
  • Pharmaceutical industry
  • Pulp and paper industry
  • Toy industry

Frontiers of research

Flexible Manufacturing Systems



A typical FMS system
A flexible manufacturing system (FMS) is a manufacturing system in which there is some amount of flexibility that allows the system to react to changes, whether predicted or unpredicted. This flexibility is generally considered to fall into two categories, which both contain numerous subcategories. The first category, machine flexibility, covers the system's ability to be changed to produce new product types and to change the order of operations executed on a part. The second category is called routing flexibility, which consists of the ability to use multiple machines to perform the same operation on a part, as well as the system's ability to absorb large-scale changes, such as in volume, capacity, or capability. Most FMSs consist of three main systems: the work machines, which are often automated CNC machines; the material handling system, which connects the machines and optimizes the flow of parts; and the central control computer, which controls material movements and machine flow. The main advantage of an FMS is its high flexibility in managing manufacturing resources such as time and effort in order to manufacture a new product. The best application of an FMS is found in the production of small sets of products of the kind that would otherwise be made by mass production.

Computer integrated manufacturing

Computer-integrated manufacturing (CIM) in engineering is a method of manufacturing in which the entire production process is controlled by computer. Traditionally separate process methods are joined through a computer by CIM. This integration allows the individual processes to exchange information with each other and to initiate actions. Through this integration, manufacturing can be faster and less error-prone, although the main advantage is the ability to create automated manufacturing processes. Typically CIM relies on closed-loop control processes based on real-time input from sensors. It is also known as flexible design and manufacturing.

Friction stir welding

Friction stir welding, a new type of welding, was invented in 1991 by The Welding Institute (TWI). This innovative steady-state (non-fusion) welding technique joins materials previously considered un-weldable, including several aluminum alloys. It may play an important role in the future construction of airplanes, potentially replacing rivets. Current uses of this technology include welding the seams of the aluminum main Space Shuttle external tank, the Orion Crew Vehicle test article, the Boeing Delta II and Delta IV expendable launch vehicles and the SpaceX Falcon 1 rocket; armor plating for amphibious assault ships; and welding the wings and fuselage panels of the new Eclipse 500 aircraft from Eclipse Aviation, among a steadily growing range of applications.
Close-up view of a friction stir weld tack tool
Other areas of research include product design, MEMS (micro-electro-mechanical systems), lean manufacturing, intelligent manufacturing systems, green manufacturing, precision engineering, and smart materials.


Mechanics

Mechanics is the branch of physics concerned with the behavior of physical bodies when subjected to forces or displacements, and the subsequent effects of the bodies on their environment. The discipline has its roots in several ancient civilizations (see History of classical mechanics and Timeline of classical mechanics). During the early modern period, scientists such as Galileo, Kepler, and especially Newton, laid the foundation for what is now known as classical mechanics.
The major branches and divisions of mechanics are outlined below.
Branches of mechanics

Classical versus quantum

The major division of the mechanics discipline separates classical mechanics from quantum mechanics.
Historically, classical mechanics came first, while quantum mechanics is a comparatively recent invention. Classical mechanics originated with Isaac Newton's laws of motion in the Philosophiæ Naturalis Principia Mathematica, while quantum mechanics did not appear until 1900. Both are commonly held to constitute the most certain knowledge that exists about physical nature. Classical mechanics has especially often been viewed as a model for other so-called exact sciences. Essential in this respect is the relentless use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them.
Quantum mechanics is of a wider scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects; each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers. Quantum mechanics has superseded classical mechanics at the foundational level and is indispensable for the explanation and prediction of processes at molecular and (sub)atomic level. However, for macroscopic processes classical mechanics is able to solve problems which are unmanageably difficult in quantum mechanics and hence remains useful and well used.
Modern descriptions of such behavior begin with a careful definition of such quantities as displacement (distance moved), time, velocity, acceleration, mass, and force. Until about 400 years ago, however, motion was explained from a very different point of view. For example, following the ideas of Greek philosopher and scientist Aristotle, scientists reasoned that a cannonball falls down because its natural position is in the earth; the sun, the moon, and the stars travel in circles around the earth because it is the nature of heavenly objects to travel in perfect circles.
The Italian physicist and astronomer Galileo brought together the ideas of other great thinkers of his time and began to analyze motion in terms of distance traveled from some starting position and the time that it took. He showed that the speed of falling objects increases steadily during the time of their fall. This acceleration is the same for heavy objects as for light ones, provided air friction (air resistance) is discounted. The English mathematician and physicist Sir Isaac Newton improved this analysis by defining force and mass and relating these to acceleration. For objects traveling at speeds close to the speed of light, Newton’s laws were superseded by Albert Einstein’s theory of relativity. For atomic and subatomic particles, Newton’s laws were superseded by quantum theory. For everyday phenomena, however, Newton’s three laws of motion remain the cornerstone of dynamics, which is the study of what causes motion.

 
Einsteinian versus Newtonian

Analogous to the quantum-versus-classical reformation, Einstein's general and special theories of relativity have expanded the scope of mechanics beyond that of Newton and Galileo, and have made fundamental corrections to it that become significant, and even dominant, as the speeds of material objects approach the speed of light, which cannot be exceeded.
For example, in Newtonian mechanics the law of motion reads
F = ma,
whereas in Einsteinian mechanics, using the Lorentz transformations (first derived by Hendrik Lorentz), it becomes
F = γma,
where γ is the Lorentz factor.
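A small numerical illustration of how the Lorentz factor makes the two expressions diverge as speed grows (using the form F = γma quoted above; the chosen mass, acceleration, and speeds are arbitrary):

import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2 / c^2)"""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

m = 1.0    # kg, arbitrary
a = 10.0   # m/s^2, arbitrary

for v in (0.0, 0.1 * C, 0.5 * C, 0.9 * C, 0.99 * C):
    gamma = lorentz_factor(v)
    f_newton = m * a             # F = ma
    f_einstein = gamma * m * a   # F = gamma m a, the form quoted above
    print(f"v = {v / C:4.2f}c   gamma = {gamma:7.3f}   "
          f"F_Newton = {f_newton:5.1f} N   F_relativistic = {f_einstein:7.1f} N")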

Einsteinian versus quantum

Relativistic corrections are also needed for quantum mechanics, although general relativity has not been integrated with quantum theory; the two theories remain incompatible, a hurdle which must be overcome in developing a Grand Unified Theory or Theory of Everything.

Antiquity 

The main theory of mechanics in antiquity was Aristotelian mechanics. A later developer in this tradition was Hipparchus.

Aristotelian physics is the natural science described in the works of the Greek philosopher Aristotle (384 BC – 322 BC). In the Physics, Aristotle established general principles of change that govern all natural bodies, both living and inanimate, celestial and terrestrial—including all motion, change in respect to place, change in respect to size or number, qualitative change of any kind, and coming to be and passing away. As Martin Heidegger, one of the foremost philosophers of the twentieth century, once wrote,
Aristotelian "physics" is different from what we mean today by this word, not only to the extent that it belongs to antiquity whereas the modern physical sciences belong to modernity, rather above all it is different by virtue of the fact that Aristotle's "physics" is philosophy, whereas modern physics is a positive science that presupposes a philosophy.... This book determines the warp and woof of the whole of Western thinking, even at that place where it, as modern thinking, appears to think at odds with ancient thinking. But opposition is invariably comprised of a decisive, and often even perilous, dependence. Without Aristotle's Physics there would have been no Galileo.
To Aristotle, physics is a broad term that includes all natural sciences, such as philosophy of mind, body, sensory experience, memory and biology, and constitutes the foundational thinking underlying many of his works.

Ancient concepts

Some concepts involved in Aristotle's physics are:
  1. Teleology: Aristotle observes that natural things tend toward definite goals or ends insofar as they are natural. Regularities manifest a rudimentary kind of teleology.
  2. Natural motion: Terrestrial objects tend toward a different part of the universe according to their composition of the four elements. For example, earth, the heaviest element, tends toward the center of the universe—hence the reason for the Earth being at the center. At the opposite extreme the lightest element, fire, tends upward, away from the center. The relative proportion of the four elements composing an object determines its motion. The elements are not proper substances in Aristotelian theory (or in the modern sense of the word), and it is not possible to refine an arbitrarily pure sample of an element; they were abstractions. One might instead consider an arbitrarily pure sample of a terrestrial substance as having a large ratio of one element relative to the others.
  3. Terrestrial motion: Terrestrial objects move downward or upward toward their natural place. Motion from side to side results from the turbulent collision and sliding of the objects as well as transformations between the elements, (generation and corruption).
  4. Rectilinear motion: Ideal terrestrial motion would proceed straight up or straight down at constant speed. Celestial motion is always ideal: it is circular and its speed is constant.
  5. Speed, weight and resistance: The ideal speed of a terrestrial object is directly proportional to its weight. In nature, however, the medium obstructing an object's path is a limiting factor, making the speed inversely proportional to the viscosity of the medium.
  6. Vacuum isn't possible: Vacuum doesn't occur, but hypothetically, terrestrial motion in a vacuum would be indefinitely fast.
  7. Continuum: Aristotle argues against the indivisibles of Democritus (which differ considerably from the historical and the modern use of the term atom).
  8. Aether: The "greater and lesser lights of heaven", (the sun, moon, planets and stars), are embedded in perfectly concentric crystal spheres that rotate eternally at fixed rates. Because the spheres never change and (meteorites notwithstanding) don't fall down or rise up from the ground, they cannot be composed of the four terrestrial elements. Much as Homer's æthere (αἰθήρ), the "pure air" of Mount Olympus was the divine counterpart of the air (άήρ, aer) breathed by mortals, the celestial spheres are composed of a special element, eternal and unchanging, with circular natural motion.
  9. Terrestrial change: Unlike the eternal and unchanging celestial aether, each of the four terrestrial elements is capable of changing into either of the two elements it shares a property with: e.g. the cold and wet (water) can transform into the hot and wet (air) or the cold and dry (earth), and any apparent change into the hot and dry (fire) is actually a two-step process. These properties are predicated of an actual substance relative to the work it is able to do: that of heating or chilling and of desiccating or moistening. The four elements exist only with regard to this capacity and relative to some potential work. The celestial element is eternal and unchanging, so only the four terrestrial elements account for coming to be and passing away; this is also called "generation and corruption" after the Latin title of Aristotle's De Generatione et Corruptione (Περὶ γενέσεως καὶ φθορᾶς).
  10. Celestial motion: The crystal spheres carrying the sun, moon and stars move eternally with unchanging circular motion. They're composed of solid aether and no gaps exist between the spheres. Spheres are embedded within spheres to account for the wandering stars (i.e. the modern planets, which appear to move erratically in comparison to the sun, moon and stars). Later, the belief that all spheres are concentric was forsaken in favor of Ptolemy's deferent and epicycle. Aristotle submits to the calculations of astronomers regarding the total number of spheres, and various accounts give a number in the neighborhood of 50 spheres. An unmoved mover is assumed for each sphere, including a prime mover for the sphere of fixed stars. The unmoved movers do not push the spheres (nor could they, being insubstantial and dimensionless); rather, they're the final cause of the motion, meaning they explain it in a way that's similar to the explanation "the soul is moved by beauty". They simply "think about thinking", eternally without change, which is the idea of "being qua being" in Aristotle's reformulation of Plato's theory.
While consistent with common human experience, Aristotle's principles were not based on controlled, quantitative experiments, so, while they account for many broad features of nature, they do not describe our universe in the precise, quantitative way we have more recently come to expect from science. Contemporaries of Aristotle like Aristarchus rejected these principles in favor of heliocentrism, but their ideas were not widely accepted. Aristotle's principles were difficult to disprove merely through casual everyday observation, but later development of the scientific method challenged his views with experiments, careful measurement, and more advanced technology such as the telescope and vacuum pump.

Elements

Aristotle taught that the elements which compose the Earth are different from the one that composes the heavens. He believed that four elements make up everything under the moon (the terrestrial): earth, air, fire and water. He also held that the heavens are made of a special, fifth element called "aether", which is weightless and "incorruptible" (which is to say, it doesn't change). Aether is also known by the name "quintessence"—literally, "fifth substance".
Page from an 1837 edition of Physica by the ancient Greek philosopher Aristotle—a book about a variety of subjects including the philosophy of nature and some topics within physics

He considered heavy substances such as iron and other metals to consist primarily of the element earth, with a smaller amount of the other three terrestrial elements. Other, lighter objects, he believed, have less earth, relative to the other three elements in their composition.

Motion

Aristotle held that each of the four terrestrial (or worldly) elements moves toward its natural place, and that this natural motion would proceed unless hindered. For instance, because smoke is mainly air, it rises toward the sky but not as high as fire. He also taught that objects move against their natural motion only when forced (i.e. pushed) in a different direction and only while that force is being applied. This idea had flaws that were apparent to Aristotle and his contemporaries. It was questionable, for example, how an arrow would continue to fly forward after leaving the bowstring, which could no longer be forcing it forward. In response, Aristotle suggested the air behind an arrow in flight is thinned and the surrounding air, rushing in to fill that potential vacuum, is what pushes it forward. This was consistent with his explanation of a medium, such as air or water, causing resistance to the motion of an object passing through it. The turbulent motion of air around an arrow in flight is very complicated, and still not fully understood.
A vacuum, or void, is a place free of everything, and Aristotle argued against the possibility. Aristotle believed that the speed of an object's motion is proportional to the force being applied (or the object's weight in the case of natural motion) and inversely proportional to the viscosity of the medium; the more tenuous a medium is, the faster the motion. He reasoned that objects moving in a void could move indefinitely fast and thus the objects surrounding a void would immediately fill it before it could actually form.

Natural place

The Aristotelian explanation of gravity is that all bodies move toward their natural place. For the element earth, that place is the center of the (geocentric) universe, next comes the natural place of water (in a concentric shell around that of earth). The natural place of air is likewise a concentric shell surrounding the place of water. Sea level is between those two. Finally, the natural place of fire is higher than that of air but below the innermost celestial sphere, (the one carrying the Moon). Even at locations well above sea level, such as a mountain top, an object made mostly of the former two elements tends to fall and objects made mostly of the latter two tend to rise.

Medieval commentary

The theory of impetus was an auxiliary or secondary theory of Aristotelian dynamics, put forth initially to explain projectile motion against gravity. It was introduced by Jean Buridan in the 14th century and became an ancestor of the concepts of inertia, momentum and acceleration in classical mechanics.

The problem of projectile motion in Aristotelian dynamics

Aristotelian dynamics presupposes that all motion against resistance requires a conjoined mover continuously to supply motive force. In cases of projectile motion, however, there is no apparent mover to counteract gravity. To resolve the problem of continued motion after contact is lost with the original projector, Aristotle tentatively suggested the auxiliary theory that the propellant is the medium through which the projectile travels. The medium was postulated to be endowed with an incorporeal motive force impressed within its parts by the original projector. In the theories described below, the motive force or "impetus" is instead regarded to be impressed directly within the projectile itself by the original projector and is not mediated by the medium through which the projectile moves.

Philoponan theory

In the 6th century, John Philoponus partly accepted Aristotle's theory that "continuation of motion depends on continued action of a force," but modified it to include his idea that the hurled body acquires a motive power or inclination for forced movement from the agent producing the initial motion and that this power secures the continuation of such motion. However, he argued that this impressed virtue was temporary; that it was a self-expending inclination, and thus the violent motion produced comes to an end, changing back into natural motion.

The Avicennan theory

In the 11th century, Avicenna discussed Philoponus' theory in The Book of Healing; in Physics IV.14 he says:
When we independently verify the issue (of projectile motion), we find the most correct doctrine is the doctrine of those who think that the moved object acquires an inclination from the mover
In the 12th century, Hibat Allah Abu'l-Barakat al-Baghdaadi adopted and modified Avicenna's theory on projectile motion. In his Kitab al-Mu'tabar, Abu'l-Barakat stated that the mover imparts a violent inclination (mayl qasri) on the moved and that this diminishes as the moving object distances itself from the mover. Jean Buridan and Albert of Saxony later refer to Abu'l-Barakat in explaining that the acceleration of a falling body is a result of its increasing impetus.

Buridan impetus

In the 14th century, Jean Buridan postulated the notion of motive force, which he named impetus.
When a mover sets a body in motion he implants into it a certain impetus, that is, a certain force enabling a body to move in the direction in which the mover starts it, be it upwards, downwards, sidewards, or in a circle. The implanted impetus increases in the same ratio as the velocity. It is because of this impetus that a stone moves on after the thrower has ceased moving it. But because of the resistance of the air (and also because of the gravity of the stone) which strives to move it in the opposite direction to the motion caused by the impetus, the latter will weaken all the time. Therefore the motion of the stone will be gradually slower, and finally the impetus is so diminished or destroyed that the gravity of the stone prevails and moves the stone towards its natural place. In my opinion one can accept this explanation because the other explanations prove to be false whereas all phenomena agree with this one.
Buridan gives his theory a mathematical value: impetus = weight × velocity.
Buridan's pupil Dominicus de Clavasio wrote in his 1357 De Caelo as follows:
"When something moves a stone by violence, in addition to imposing on it an actual force, it impresses in it a certain impetus. In the same way gravity not only gives motion itself to a moving body, but also gives it a motive power and an impetus, ...".
Buridan's position was that a moving object would only be arrested by the resistance of the air and the weight of the body, which would oppose its impetus. Buridan also maintained that impetus was proportional to speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum. Despite the obvious similarities to the more modern idea of momentum, Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also maintained that impetus could be not only linear, but also circular in nature, causing objects (such as celestial bodies) to move in a circle.
In order to dispense with the need for positing continually moving intelligences or souls in the celestial spheres, which he pointed out are not posited by the Bible, he applied impetus theory to their endless rotation by extension of a terrestrial example of its application to rotary motion in the form of a rotating millwheel that continues rotating for a long time after the originally propelling hand is withdrawn, driven by the impetus impressed within it. He wrote on the celestial impetus of the spheres as follows:
"God, when He created the world, moved each of the celestial orbs as He pleased, and in moving them he impressed in them impetuses which moved them without his having to move them any more...And those impetuses which he impressed in the celestial bodies were not decreased or corrupted afterwards, because there was no inclination of the celestial bodies for other movements. Nor was there resistance which would be corruptive or repressive of that impetus."
However, having discounted the possibility of any resistance, whether due to a contrary inclination to move in any opposite direction or due to any external resistance, Buridan concluded that their impetus was not corrupted by any resistance. He also discounted any inherent resistance to motion in the form of an inclination to rest within the spheres themselves, such as the inertia posited by Averroes and Aquinas, for otherwise that resistance would destroy their impetus, as the anti-Duhemian historian of science Anneliese Maier maintained the Parisian impetus dynamicists were forced to conclude because of their belief in an inherent inclinatio ad quietem or inertia in all bodies. In fact, contrary to that inertial variant of Aristotelian dynamics, according to Buridan prime matter does not resist motion. But this then raised the question within Aristotelian dynamics of why the motive force of impetus does not move the spheres with infinite speed.
One impetus dynamics answer seemed to be that it was a secondary kind of motive force that produced uniform motion rather than infinite speed, just as it seemed Aristotle had supposed the spheres' moving souls do, or rather than producing uniformly accelerated motion like the primary force of gravity did by producing constantly increasing amounts of impetus. However in his Treatise on the heavens and the world in which the heavens are moved by inanimate inherent mechanical forces, Buridan's pupil Oresme offered an alternative Thomist inertial response to this problem in that he did posit a resistance to motion inherent in the heavens (i.e. in the spheres), but which is only a resistance to acceleration beyond their natural speed, rather than to motion itself, and was thus a tendency to preserve their natural speed. This analysis of the dynamics of the motions of the spheres seems to have been a first anticipation of Newton's revised conception of inertia as only resisting accelerated motion but not resisting uniform motion.
Buridan's thought was followed up by his pupil Albert of Saxony (1316–1390) and the Oxford Calculators. Their work in turn was elaborated by Nicole Oresme who pioneered the practice of demonstrating laws of motion in the form of graphs.

The tunnel experiment and oscillatory motion

The Buridan impetus theory developed one of the most important thought-experiments in the history of science, namely the so-called 'tunnel-experiment', so important because it brought oscillatory and pendulum motion within the pale of dynamical analysis and understanding in the science of motion for the very first time and thereby also established one of the important principles of classical mechanics. The pendulum was to play a crucially important role in the development of mechanics in the 17th century, and so more generally was the axiomatic principle of Galilean, Huygenian and Leibnizian dynamics to which the tunnel experiment also gave rise, namely that a body rises to the same height from which it has fallen, a principle of gravitational potential energy. As Galileo Galilei expressed this fundamental principle of his dynamics in his 1632 Dialogo:
"The heavy falling body acquires sufficient impetus [in falling from a given height] to carry it back to an equal height."
This imaginary experiment predicted that a cannonball dropped down a tunnel going straight through the centre of the Earth and out the other side would go past the centre and rise on the opposite surface to the same height from which it had first fallen on the other side, driven upwards past the centre by the gravitationally created impetus it had continually accumulated in falling downwards to the centre. This impetus would require a violent motion correspondingly rising to the same height past the centre for the now opposing force of gravity to destroy it all in the same distance which it had previously required to create it, and whereupon at this turning point the ball would then descend again and oscillate back and forth between the two opposing surfaces about the centre ad infinitum in principle. Thus the tunnel experiment provided the first dynamical model of oscillatory motion, albeit a purely imaginary one in the first instance, and specifically in terms of A-B impetus dynamics.
However, this thought-experiment was then most cunningly applied to the dynamical explanation of a real world oscillatory motion, namely that of the pendulum, as follows. The oscillating motion of the cannonball was dynamically assimilated to that of a pendulum bob by imagining it to be attached to the end of an immensely cosmologically long cord suspended from the vault of the fixed stars centred on the Earth, whereby the relatively short arc of its path through the enormously distant Earth was practically a straight line along the tunnel. Real world pendula were then conceived of as just micro versions of this 'tunnel pendulum', the macro-cosmological paradigmatic dynamical model of the pendulum, but just with far shorter cords and with their bobs oscillating above the Earth's surface in arcs corresponding to the tunnel inasmuch as their oscillatory mid-point was dynamically assimilated to the centre of the tunnel as the centre of the Earth.
Hence by means of such impressive literally 'lateral thinking', rather than the dynamics of pendulum motion being conceived of as the bob inexplicably somehow falling downwards compared to the vertical to a gravitationally lowest point and then inexplicably being pulled back up again on the same upper side of that point, rather it was its lateral horizontal motion that was conceived of as a case of gravitational free-fall followed by violent motion in a recurring cycle, with the bob repeatedly travelling through and beyond the motion's vertically lowest but horizontally middle point that stood proxy for the centre of the Earth in the tunnel pendulum. So on this imaginative lateral gravitational thinking outside the box the lateral motions of the bob first towards and then away from the normal in the downswing and upswing become lateral downward and upward motions in relation to the horizontal rather than to the vertical.
Thus whereas the orthodox Aristotelians could only see pendulum motion as a dynamical anomaly, as inexplicably somehow 'falling to rest with difficulty' as historian and philosopher of science Thomas Kuhn put it in his 1962 The Structure of Scientific Revolutions, on the impetus theory's novel analysis it was not falling with any dynamical difficulty at all in principle, but was rather falling in repeated and potentially endless cycles of alternating downward gravitationally natural motion and upward gravitationally violent motion. Hence, for example, Galileo was eventually to appeal to pendulum motion to demonstrate that the speed of gravitational free-fall is the same for all unequal weights precisely by virtue of dynamically modelling pendulum motion in this manner as a case of cyclically repeated gravitational free-fall along the horizontal in principle.
In fact the tunnel experiment, and hence pendulum motion, was an imaginary crucial experiment in favour of impetus dynamics against both orthodox Aristotelian dynamics without any auxiliary impetus theory, and also against Aristotelian dynamics with its H-P variant. For according to the latter two theories the bob cannot possibly pass beyond the normal. In orthodox Aristotelian dynamics there is no force to carry the bob upwards beyond the centre in violent motion against its own gravity that carries it to the centre, where it stops. And when conjoined with the Philoponus auxiliary theory, in the case where the cannonball is released from rest, again there is no such force because either all the initial upward force of impetus originally impressed within it to hold it in static dynamical equilibrium has been exhausted, or else if any remained it would be acting in the opposite direction and combine with gravity to prevent motion through and beyond the centre. Nor were the cannonball to be positively hurled downwards, and thus with a downward initial impetus, could it possibly result in an oscillatory motion. For although it could then possibly pass beyond the centre, it could never return to pass through it and rise back up again. For dynamically in this case although it would be logically possible for it to pass beyond the centre if when it reached it some of the constantly decaying downward impetus remained and still sufficiently much to be stronger than gravity to push it beyond the centre and upwards again, nevertheless when it eventually then became weaker than gravity, whereupon the ball would then be pulled back towards the centre by its gravity, it could not then pass beyond the centre to rise up again, because it would have no force directed against gravity to overcome it. For any possibly remaining impetus would be directed 'downwards' towards the centre, that is, in the same direction in which it was originally created.
Thus pendulum motion was dynamically impossible for both orthodox Aristotelian dynamics and also for H-P impetus dynamics on this 'tunnel model' analogical reasoning. But it was predicted by the impetus theory's tunnel prediction precisely because that theory posited that a continually accumulating downwards force of impetus directed towards the centre is acquired in natural motion, sufficient to then carry it upwards beyond the centre against gravity, and rather than only having an initially upwards force of impetus away from the centre as in the theory of natural motion. So the tunnel experiment constituted a crucial experiment between three alternative theories of natural motion.
On this analysis then impetus dynamics was to be preferred if the Aristotelian science of motion was to incorporate a dynamical explanation of pendulum motion. And indeed it was also to be preferred more generally if it was to explain other oscillatory motions, such as the to and fro vibrations around the normal of musical strings in tension, such as those of a zither, lute or guitar. For here the analogy made with the gravitational tunnel experiment was that the tension in the string pulling it towards the normal played the role of gravity, and thus when plucked i.e. pulled away from the normal and then released, this was the equivalent of pulling the cannonball to the Earth's surface and then releasing it. Thus the musical string vibrated in a continual cycle of the alternating creation of impetus towards the normal and its destruction after passing through the normal until this process starts again with the creation of fresh 'downward' impetus once all the 'upward' impetus has been destroyed.
This positing of a dynamical family resemblance of the motions of pendula and vibrating strings with the paradigmatic tunnel-experiment, the original mother of all oscillations in the history of dynamics, was one of the greatest imaginative developments of medieval Aristotelian dynamics in its increasing repertoire of dynamical models of different kinds of motion.
Shortly before Galileo's theory of impetus, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone:
"…[Any] portion of corporeal matter which moves by itself when an impetus has been impressed on it by any external motive force has a natural tendency to move on a rectilinear, not a curved, path."
Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion.

Medieval age
 
In the Middle Ages, Aristotle's theories were criticized and modified by a number of figures, beginning with John Philoponus in the 6th century. A central problem was that of projectile motion, which was discussed by Hipparchus and Philoponus. This led to the development of the theory of impetus by the 14th-century French philosopher Jean Buridan, which developed into the modern theories of inertia, velocity, acceleration and momentum. This work and others were developed in 14th-century England by the Oxford Calculators such as Thomas Bradwardine, who studied and formulated various laws regarding falling bodies.
On the question of a body subject to a constant (uniform) force, the 12th-century Jewish-Arab scholar Nathanel (Iraqi, of Baghdad) stated that constant force imparts constant acceleration, while the main properties of uniformly accelerated motion (as of falling bodies) were worked out by the 14th-century Oxford Calculators.

Early modern age

Two central figures in the early modern age are Galileo Galilei and Isaac Newton. Galileo's final statement of his mechanics, particularly of falling bodies, is his Two New Sciences (1638). Newton's 1687 Philosophiæ Naturalis Principia Mathematica provided a detailed mathematical account of mechanics, using the newly developed mathematics of calculus and providing the basis of Newtonian mechanics.
There is some dispute over priority of various ideas: Newton's Principia is certainly the seminal work and has been tremendously influential, and the systematic mathematics therein did not and could not have been stated earlier because calculus had not been developed. However, many of the ideas, particularly as pertain to inertia (impetus) and falling bodies had been developed and stated by earlier researchers, both the then-recent Galileo and the less-known medieval predecessors. Precise credit is at times difficult or contentious because scientific language and standards of proof changed, so whether medieval statements are equivalent to modern statements or sufficient proof, or instead similar to modern statements and hypotheses is often debatable.

Modern age

Two main modern developments in mechanics are general relativity of Einstein, and quantum mechanics, both developed in the 20th century based in part on earlier 19th century ideas.

Types of mechanical bodies

The often-used term body needs to stand for a wide assortment of objects, including particles, projectiles, spacecraft, stars, parts of machinery, parts of solids, parts of fluids (gases and liquids), etc.
Other distinctions between the various sub-disciplines of mechanics concern the nature of the bodies being described. Particles are bodies with little (known) internal structure, treated as mathematical points in classical mechanics. Rigid bodies have size and shape, but retain a simplicity close to that of the particle, adding just a few so-called degrees of freedom, such as orientation in space.
Otherwise, bodies may be semi-rigid, i.e. elastic, or non-rigid, i.e. fluid. These subjects have both classical and quantum divisions of study.
For instance, the motion of a spacecraft, regarding its orbit and attitude (rotation), is described by the relativistic theory of classical mechanics, while the analogous movements of an atomic nucleus are described by quantum mechanics.

Sub-disciplines in mechanics

The following are two lists of various subjects that are studied in mechanics.
Note that there is also the "theory of fields" which constitutes a separate discipline in physics, formally treated as distinct from mechanics, whether classical fields or quantum fields. But in actual practice, subjects belonging to mechanics and fields are closely interwoven. Thus, for instance, forces that act on particles are frequently derived from fields (electromagnetic or gravitational), and particles generate fields by acting as sources. In fact, in quantum mechanics, particles themselves are fields, as described theoretically by the wave function.

Classical mechanics

The following are described as forming Classical mechanics:
  • Newtonian mechanics, the original theory of motion (kinematics) and forces (dynamics)
  • Hamiltonian mechanics, a theoretical formalism, based on the principle of conservation of energy
  • Lagrangian mechanics, another theoretical formalism, based on the principle of the least action
  • Celestial mechanics, the motion of heavenly bodies: planets, comets, stars, galaxies, etc.
  • Astrodynamics, spacecraft navigation, etc.
  • Solid mechanics, elasticity, the properties of deformable bodies.
  • Fracture mechanics
  • Acoustics, sound ( = density variation propagation) in solids, fluids and gases.
  • Statics, semi-rigid bodies in mechanical equilibrium
  • Fluid mechanics, the motion of fluids
  • Soil mechanics, mechanical behavior of soils
  • Continuum mechanics, mechanics of continua (both solid and fluid)
  • Hydraulics, mechanical properties of liquids
  • Fluid statics, liquids in equilibrium
  • Applied mechanics, or Engineering mechanics
  • Biomechanics, solids, fluids, etc. in biology
  • Biophysics, physical processes in living organisms
  • Statistical mechanics, assemblies of particles too large to be described in a deterministic way
  • Relativistic or Einsteinian mechanics, universal gravitation


In physics, classical mechanics is one of the two major sub-fields of mechanics, which is concerned with the set of physical laws describing the motion of bodies under the action of a system of forces. The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering and technology.
Classical mechanics describes the motion of macroscopic objects, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets, stars, and galaxies. Besides this, many specializations within the subject deal with gases, liquids, solids, and other specific sub-topics. Classical mechanics provides extremely accurate results as long as the domain of study is restricted to large objects and the speeds involved do not approach the speed of light. When the objects being dealt with become sufficiently small, it becomes necessary to introduce the other major sub-field of mechanics, quantum mechanics, which reconciles the macroscopic laws of physics with the atomic nature of matter and handles the wave-particle duality of atoms and molecules. In the case of high velocity objects approaching the speed of light, classical mechanics is enhanced by special relativity. General relativity unifies special relativity with Newton's law of universal gravitation, allowing physicists to handle gravitation at a deeper level.
The term classical mechanics was coined in the early 20th century to describe the system of physics begun by Isaac Newton and many contemporary 17th century natural philosophers, building upon the earlier astronomical theories of Johannes Kepler, which in turn were based on the precise observations of Tycho Brahe and the studies of terrestrial projectile motion of Galileo. Because these aspects of physics were developed long before the emergence of quantum physics and relativity, some sources exclude Einstein's theory of relativity from this category. However, a number of modern sources do include relativistic mechanics, which in their view represents classical mechanics in its most developed and most accurate form.
The initial stage in the development of classical mechanics is often referred to as Newtonian mechanics, and is associated with the physical concepts employed by and the mathematical methods invented by Newton himself, in parallel with Leibniz, and others. This is further described in the following sections. Later, more abstract and general methods were developed, leading to reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. These advances were largely made in the 18th and 19th centuries, and they extend substantially beyond Newton's work, particularly through their use of analytical mechanics. Ultimately, the mathematics developed for these were central to the creation of quantum mechanics.

Description of the theory

The following introduces the basic concepts of classical mechanics. For simplicity, it often models real-world objects as point particles, objects with negligible size. The motion of a point particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. Each of these parameters is discussed in turn.
The analysis of projectile motion is a part of classical mechanics
In reality, the kind of objects that classical mechanics can describe always have a non-zero size. (The physics of very small particles, such as the electron, is more accurately described by quantum mechanics). Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom—for example, a baseball can spin while it is moving. However, the results for point particles can be used to study such objects by treating them as composite objects, made up of a large number of interacting point particles. The center of mass of a composite object behaves like a point particle.
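A minimal sketch of that last point, computing the center of mass of a composite object modelled as point particles (the masses and positions are arbitrary examples):

# Center of mass of a composite object modelled as point particles:
# r_cm = (sum of m_i * r_i) / (sum of m_i).  The values are arbitrary.

particles = [
    (2.0, (0.0, 0.0, 0.0)),  # (mass in kg, (x, y, z) position in m)
    (1.0, (1.0, 0.0, 0.0)),
    (3.0, (0.0, 2.0, 1.0)),
]

total_mass = sum(m for m, _ in particles)
r_cm = tuple(sum(m * r[axis] for m, r in particles) / total_mass for axis in range(3))
print(total_mass, r_cm)   # the composite moves as a point particle located at r_cm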

 

Position and its derivatives

The SI derived "mechanical" (that is, not electromagnetic or thermal) units, expressed in kg, m and s:

  Quantity                        Unit
  Position                        m
  Angular position / angle        unitless (radian)
  Velocity                        m s−1
  Angular velocity                s−1
  Acceleration                    m s−2
  Angular acceleration            s−2
  Jerk                            m s−3
  "Angular jerk"                  s−3
  Specific energy                 m2 s−2
  Absorbed dose rate              m2 s−3
  Moment of inertia               kg m2
  Momentum                        kg m s−1
  Angular momentum                kg m2 s−1
  Force                           kg m s−2
  Torque                          kg m2 s−2
  Energy                          kg m2 s−2
  Power                           kg m2 s−3
  Pressure and energy density     kg m−1 s−2
  Surface tension                 kg s−2
  Spring constant                 kg s−2
  Irradiance and energy flux      kg s−3
  Kinematic viscosity             m2 s−1
  Dynamic viscosity               kg m−1 s−1
  Density (mass density)          kg m−3
  Density (weight density)        kg m−2 s−2
  Number density                  m−3
  Action                          kg m2 s−1
The position of a point particle is defined with respect to an arbitrary fixed reference point, O, in space, usually accompanied by a coordinate system, with the reference point located at the origin of the coordinate system. It is defined as the vector r from O to the particle. In general, the point particle need not be stationary relative to O, so r is a function of t, the time elapsed since an arbitrary initial time. In pre-Einstein relativity (known as Galilean relativity), time is considered an absolute, i.e., the time interval between any given pair of events is the same for all observers. In addition to relying on absolute time, classical mechanics assumes Euclidean geometry for the structure of space.

Velocity and speed

The velocity, or the rate of change of position with time, is defined as the derivative of the position with respect to time or
\mathbf{v} = \frac{\mathrm{d}\mathbf{r}}{\mathrm{d}t} \, .
In classical mechanics, velocities are directly additive and subtractive. For example, if one car traveling East at 60 km/h passes another car traveling East at 50 km/h, then from the perspective of the slower car the faster car is traveling East at 60 − 50 = 10 km/h, while from the perspective of the faster car the slower car is moving 10 km/h to the West. Velocities are directly additive as vector quantities; they must be dealt with using vector analysis.
Mathematically, if the velocity of the first object in the previous discussion is denoted by the vector u = ud and the velocity of the second object by the vector v = ve, where u is the speed of the first object, v is the speed of the second object, and d and e are unit vectors in the directions of motion of each particle respectively, then the velocity of the first object as seen by the second object is
\mathbf{u}' = \mathbf{u} - \mathbf{v} \, .
Similarly,
\mathbf{v'}= \mathbf{v} - \mathbf{u} \, .
When both objects are moving in the same direction, this equation can be simplified to
\mathbf{u}' = ( u - v ) \mathbf{d} \, .
Or, by ignoring direction, the difference can be given in terms of speed only:
u' = u - v \, .
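A minimal numerical sketch of the velocity subtraction above, using the two-car example from the text (East taken as the +x direction):

# Relative velocity in classical mechanics: u' = u - v, component by component.
# Two cars travelling East (+x) at 60 km/h and 50 km/h, as in the text.

def relative_velocity(u, v):
    """Velocity of the first object as seen from the second: u' = u - v."""
    return tuple(ui - vi for ui, vi in zip(u, v))

u = (60.0, 0.0)   # faster car, km/h
v = (50.0, 0.0)   # slower car, km/h

print(relative_velocity(u, v))   # (10.0, 0.0): faster car pulls ahead East at 10 km/h
print(relative_velocity(v, u))   # (-10.0, 0.0): slower car drops back (moves West) at 10 km/h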

Acceleration

The acceleration, or rate of change of velocity, is the derivative of the velocity with respect to time (the second derivative of the position with respect to time) or
\mathbf{a} = \frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t} \, .
Acceleration can arise from a change with time of the magnitude of the velocity or of the direction of the velocity or both. If only the magnitude v of the velocity decreases, this is sometimes referred to as deceleration, but generally any change in the velocity with time, including deceleration, is simply referred to as acceleration.
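Because velocity and acceleration are successive time derivatives of position, both can be estimated from sampled positions by finite differences; a minimal sketch, using free fall from rest as an arbitrary example trajectory:

# Estimate v = dr/dt and a = dv/dt from sampled positions with forward
# differences.  The trajectory r(t) = 0.5 * 9.81 * t^2 (free fall from rest)
# is just an example.

dt = 0.01
times = [i * dt for i in range(6)]
positions = [0.5 * 9.81 * t ** 2 for t in times]

velocities = [(positions[i + 1] - positions[i]) / dt for i in range(len(positions) - 1)]
accelerations = [(velocities[i + 1] - velocities[i]) / dt for i in range(len(velocities) - 1)]

print(velocities)       # close to 9.81 * t at each sample
print(accelerations)    # approximately 9.81 throughout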

Frames of reference

While the position, velocity and acceleration of a particle can be referred to any observer in any state of motion, classical mechanics assumes the existence of a special family of reference frames in terms of which the mechanical laws of nature take a comparatively simple form. These special reference frames are called inertial frames. An inertial frame is one in which an object free of any force interactions (an idealized situation) appears either to be at rest or in a state of uniform motion in a straight line. This is the fundamental definition of an inertial frame. Inertial frames are characterized by the requirement that all forces entering the observer's physical laws originate in identifiable sources (charges, gravitational bodies, and so forth). A non-inertial reference frame is one accelerating with respect to an inertial one, and in such a non-inertial frame a particle is subject to acceleration by fictitious forces that enter the equations of motion solely as a result of its accelerated motion and do not originate in identifiable sources. These fictitious forces are in addition to the real forces recognized in an inertial frame. A key concept of inertial frames is the method for identifying them. For practical purposes, reference frames that are unaccelerated with respect to the distant stars are regarded as good approximations to inertial frames.
Consider two reference frames S and S' . For observers in each of the reference frames an event has space-time coordinates of (x,y,z,t) in frame S and (x′,y′,z′,t′) in frame S′. Assuming time is measured the same in all reference frames, and if we require x = x' when t = 0, then the relation between the space-time coordinates of the same event observed from the reference frames S′ and S, which are moving at a relative velocity of u in the x direction is:
x′ = x − ut
y′ = y
z′ = z
t′ = t
This set of formulas defines a group transformation known as the Galilean transformation (informally, the Galilean transform). This group is a limiting case of the Poincaré group used in special relativity. The limiting case applies when the velocity u is very small compared to c, the speed of light.
The transformations have the following consequences:
  • v′ = v − u (the velocity v′ of a particle from the perspective of S′ is slower by u than its velocity v from the perspective of S)
  • a′ = a (the acceleration of a particle is the same in any inertial reference frame)
  • F′ = F (the force on a particle is the same in any inertial reference frame)
  • the speed of light is not a constant in classical mechanics, nor does the special position given to the speed of light in relativistic mechanics have a counterpart in classical mechanics.
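The transformation and its velocity consequence can be checked numerically; a minimal sketch (the frame speed and event coordinates are arbitrary examples):

# Galilean transformation from frame S to frame S' moving at speed u along +x:
# x' = x - u*t, y' = y, z' = z, t' = t.  Event values below are arbitrary.

def galilean(event, u):
    """Map an event (x, y, z, t) observed in S to its coordinates in S'."""
    x, y, z, t = event
    return (x - u * t, y, z, t)

u = 5.0                         # relative frame velocity, m/s
event = (20.0, 1.0, 0.0, 2.0)   # (x, y, z, t) in frame S
print(galilean(event, u))       # (10.0, 1.0, 0.0, 2.0)

# Velocity consequence v' = v - u: a particle moving at 12 m/s along +x in S
v = 12.0
print(v - u)                    # 7.0 m/s as measured in S'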
For some problems, it is convenient to use rotating coordinates (reference frames). Thereby one can either keep a mapping to a convenient inertial frame, or introduce additionally a fictitious centrifugal force and Coriolis force.

Forces; Newton's second law

Newton was the first to mathematically express the relationship between force and momentum. Some physicists interpret Newton's second law of motion as a definition of force and mass, while others consider it to be a fundamental postulate, a law of nature. Either interpretation has the same mathematical consequences, historically known as "Newton's Second Law":
\mathbf{F} = \frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t} = \frac{\mathrm{d}(m \mathbf{v})}{\mathrm{d}t} \, .
The quantity mv is called the (canonical) momentum. The net force on a particle is thus equal to the rate of change of the particle's momentum with time. Since the definition of acceleration is a = dv/dt, the second law can be written in the simplified and more familiar form:
\mathbf{F} = m \mathbf{a} \, .
So long as the force acting on a particle is known, Newton's second law is sufficient to describe the motion of a particle. Once independent relations for each force acting on a particle are available, they can be substituted into Newton's second law to obtain an ordinary differential equation, which is called the equation of motion.
As an example, assume that friction is the only force acting on the particle, and that it may be modeled as a function of the velocity of the particle, for example:
\mathbf{F}_{\rm R} = - \lambda \mathbf{v} \, ,
where λ is a positive constant. Then the equation of motion is
- \lambda \mathbf{v} = m \mathbf{a} = m \frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t} \, .
This can be integrated to obtain
\mathbf{v} = \mathbf{v}_0 e^{- \lambda t / m}
where v0 is the initial velocity. This means that the velocity of this particle decays exponentially to zero as time progresses. In this case, an equivalent viewpoint is that the kinetic energy of the particle is absorbed by friction (which converts it to heat energy in accordance with the conservation of energy), slowing it down. This expression can be further integrated to obtain the position r of the particle as a function of time.
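Carrying out that further integration explicitly (a routine step, consistent with the expressions above) gives
\mathbf{r}(t) = \mathbf{r}_0 + \int_0^t \mathbf{v}_0 e^{-\lambda t'/m} \, \mathrm{d}t' = \mathbf{r}_0 + \frac{m \mathbf{v}_0}{\lambda} \left(1 - e^{-\lambda t/m}\right) \, ,
so the particle does not coast forever but approaches the finite stopping point r0 + mv0/λ as t → ∞.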
Important forces include the gravitational force and the Lorentz force for electromagnetism. In addition, Newton's third law can sometimes be used to deduce the forces acting on a particle: if it is known that particle A exerts a force F on another particle B, it follows that B must exert an equal and opposite reaction force, −F, on A. The strong form of Newton's third law requires that F and −F act along the line connecting A and B, while the weak form does not. Illustrations of the weak form of Newton's third law are often found for magnetic forces.

Work and energy

If a constant force F is applied to a particle that achieves a displacement Δr, the work done by the force is defined as the scalar product of the force and displacement vectors:
 W = \mathbf{F} \cdot \Delta \mathbf{r} \, .
More generally, if the force varies as a function of position as the particle moves from r1 to r2 along a path C, the work done on the particle is given by the line integral
 W = \int_C \mathbf{F}(\mathbf{r}) \cdot \mathrm{d}\mathbf{r} \, .
If the work done in moving the particle from r1 to r2 is the same no matter what path is taken, the force is said to be conservative. Gravity is a conservative force, as is the force due to an idealized spring, as given by Hooke's law. The force due to friction is non-conservative.
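As an illustrative sketch (assumptions: uniform gravity near the Earth's surface and a simple midpoint rule; the helper names are hypothetical), the path independence of a conservative force can be checked by evaluating the line integral numerically along two different paths:

    import numpy as np

    # Uniform gravity on a 1 kg particle: F = (0, -m*g), a conservative force.
    m, g = 1.0, 9.81
    def F(r):
        return np.array([0.0, -m * g])

    def work_along(path):
        # Approximate W = integral of F . dr with a midpoint rule over a sampled path.
        W = 0.0
        for r1, r2 in zip(path[:-1], path[1:]):
            W += F(0.5 * (r1 + r2)) @ (r2 - r1)
        return W

    # Two different paths from (0, 0) to (1, 1): a straight line and an L-shaped detour.
    s = np.linspace(0.0, 1.0, 1001)[:, None]
    straight = s * np.array([1.0, 1.0])
    detour = np.vstack([s * np.array([1.0, 0.0]),
                        np.array([1.0, 0.0]) + s * np.array([0.0, 1.0])])

    print(work_along(straight), work_along(detour))  # both ≈ -9.81 J

Both paths give the same work, as expected for a conservative force, whereas a non-conservative force would generally give path-dependent results.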
The kinetic energy Ek of a particle of mass m travelling at speed v is given by
E_k = \tfrac{1}{2}mv^2 \, .
For extended objects composed of many particles, the kinetic energy of the composite body is the sum of the kinetic energies of the particles.
The work-energy theorem states that for a particle of constant mass m the total work W done on the particle from position r1 to r2 is equal to the change in kinetic energy Ek of the particle:
W = \Delta E_k = E_{k,2} - E_{k,1} = \tfrac{1}{2}m\left(v_2^{\, 2} - v_1^{\, 2}\right) \, .
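This theorem follows directly from Newton's second law: writing F = m dv/dt and dr = v dt,
W = \int_C \mathbf{F} \cdot \mathrm{d}\mathbf{r} = \int_{t_1}^{t_2} m \frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t} \cdot \mathbf{v} \, \mathrm{d}t = \int_{t_1}^{t_2} \frac{\mathrm{d}}{\mathrm{d}t} \left( \tfrac{1}{2} m \, \mathbf{v} \cdot \mathbf{v} \right) \mathrm{d}t = \tfrac{1}{2} m \left( v_2^{\,2} - v_1^{\,2} \right) \, .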
Conservative forces can be expressed as the gradient of a scalar function, known as the potential energy and denoted Ep:
\mathbf{F} = - \mathbf{\nabla} E_p \, .
If all the forces acting on a particle are conservative, and Ep is the total potential energy (defined as the work done by the involved forces to rearrange the mutual positions of the bodies), obtained by summing the potential energies corresponding to each force, then
\mathbf{F} \cdot \Delta \mathbf{r} = - \mathbf{\nabla} E_p \cdot \Delta \mathbf{r} = - \Delta E_p
 \Rightarrow - \Delta E_p = \Delta E_k \Rightarrow \Delta (E_k + E_p) = 0 \, .
This result is known as conservation of energy and states that the total energy,
\sum E = E_k + E_p
is constant in time. It is often useful, because many commonly encountered forces are conservative.
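As a brief numerical sketch of this conservation law (the spring constant, mass, and step size here are arbitrary choices, not from the source), the total energy of a mass on an ideal Hooke's-law spring stays essentially constant when the motion is integrated with a symplectic (semi-implicit Euler) step:

    # Mass on an ideal spring: F = -k x, E_p = (1/2) k x^2, E_k = (1/2) m v^2.
    m, k = 1.0, 4.0
    x, v = 1.0, 0.0        # start stretched and at rest: total energy = 2.0 J
    dt = 1e-4

    energies = []
    for _ in range(100_000):
        v += (-k * x / m) * dt   # update velocity from the spring force
        x += v * dt              # then update position (semi-implicit Euler)
        energies.append(0.5 * m * v**2 + 0.5 * k * x**2)

    print(min(energies), max(energies))  # both remain very close to 2.0 J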

Beyond Newton's Laws

Classical mechanics also includes descriptions of the complex motions of extended non-pointlike objects. Euler's laws provide extensions to Newton's laws in this area. The concepts of angular momentum rely on the same calculus used to describe one-dimensional motion. The rocket equation extends the notion of rate of change of an object's momentum to include the effects of an object "losing mass".
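For example, for a rocket expelling propellant at a constant effective exhaust velocity v_e, integrating the momentum balance gives the ideal rocket (Tsiolkovsky) equation,
\Delta v = v_e \ln \frac{m_0}{m_f} \, ,
where m_0 and m_f are the initial and final masses of the vehicle.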
There are two important alternative formulations of classical mechanics: Lagrangian mechanics and Hamiltonian mechanics. These, and other modern formulations, usually bypass the concept of "force", instead referring to other physical quantities, such as energy, for describing mechanical systems.
The expressions given above for momentum and kinetic energy are only valid when there is no significant electromagnetic contribution. In electromagnetism, Newton's second law for current-carrying wires breaks down unless one includes the electromagnetic field contribution to the momentum of the system as expressed by the Poynting vector divided by c2, where c is the speed of light in free space.

Limits of validity

Domain of validity for Classical Mechanics
Many branches of classical mechanics are simplifications or approximations of more accurate forms; two of the most accurate are general relativity and relativistic statistical mechanics. Geometric optics is an approximation to the quantum theory of light, and does not have a superior "classical" form.

The Newtonian approximation to special relativity

In special relativity, the momentum of a particle is given by
\mathbf{p} = \frac{m \mathbf{v}}{ \sqrt{1-v^2/c^2}} \, ,
where m is the particle's mass, v its velocity, and c is the speed of light.
If v is very small compared to c, v2/c2 is approximately zero, and so
\mathbf{p} \approx m\mathbf{v} \, .
Thus the Newtonian equation p = mv is an approximation of the relativistic equation for bodies moving with low speeds compared to the speed of light.
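More explicitly, expanding the relativistic factor for small v/c shows the size of the first correction:
\mathbf{p} = m\mathbf{v}\left(1 - \frac{v^2}{c^2}\right)^{-1/2} = m\mathbf{v}\left(1 + \frac{1}{2}\frac{v^2}{c^2} + \frac{3}{8}\frac{v^4}{c^4} + \cdots\right) \approx m\mathbf{v} \quad \text{for } v \ll c \, .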
For example, the relativistic cyclotron frequency of a cyclotron, gyrotron, or high voltage magnetron is given by
f=f_c\frac{m_0}{m_0+T/c^2} \, ,
where fc is the classical frequency of an electron (or other charged particle) with kinetic energy T and (rest) mass m0 circling in a magnetic field. The electron's rest mass energy is 511 keV, so the frequency correction is 1% for a magnetic vacuum tube with a 5.11 kV direct current accelerating voltage.
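A short numerical check of the quoted 1% figure (working in energy units, with illustrative function names):

    # Relativistic correction to the cyclotron frequency: f = f_c * m0 / (m0 + T/c^2).
    # In energy units the electron rest energy m0*c^2 is 511 keV.
    REST_ENERGY_KEV = 511.0

    def frequency_ratio(T_keV):
        # Ratio f / f_c for a charged particle with kinetic energy T (in keV).
        return REST_ENERGY_KEV / (REST_ENERGY_KEV + T_keV)

    # A 5.11 kV DC accelerating voltage gives an electron T = 5.11 keV:
    print(frequency_ratio(5.11))  # ≈ 0.990, i.e. a frequency reduction of about 1%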

The classical approximation to quantum mechanics

The ray approximation of classical mechanics breaks down when the de Broglie wavelength is not much smaller than other dimensions of the system. For non-relativistic particles, this wavelength is
\lambda=\frac{h}{p}
where h is Planck's constant and p is the momentum.
Again, this happens with electrons before it happens with heavier particles. For example, the electrons used by Clinton Davisson and Lester Germer in 1927, accelerated by 54 volts, had a wavelength of 0.167 nm, which was long enough to exhibit a single diffraction side lobe when reflecting from the face of a nickel crystal with an atomic spacing of 0.215 nm. With a larger vacuum chamber, it would seem relatively easy to increase the angular resolution from around a radian to a milliradian and see quantum diffraction from the periodic patterns of integrated circuit computer memory.
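A quick check of the quoted numbers (a sketch using the non-relativistic relation p = sqrt(2 m_e e V), with rounded constants):

    import math

    h = 6.626e-34     # Planck's constant, J*s
    m_e = 9.109e-31   # electron mass, kg
    e = 1.602e-19     # elementary charge, C

    def de_broglie_wavelength_nm(volts):
        # lambda = h / p, with p = sqrt(2 * m_e * e * V) for an electron
        # accelerated through a potential difference of V volts.
        p = math.sqrt(2.0 * m_e * e * volts)
        return h / p * 1e9

    # Davisson-Germer electrons, accelerated through 54 V:
    print(de_broglie_wavelength_nm(54.0))  # ≈ 0.167 nm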
More practical examples of the failure of classical mechanics on an engineering scale are conduction by quantum tunneling in tunnel diodes and very narrow transistor gates in integrated circuits.
Classical mechanics uses the same extreme high-frequency approximation as geometric optics. It is more often accurate because it describes particles and bodies with rest mass, which have more momentum and therefore shorter de Broglie wavelengths than massless particles, such as light, with the same kinetic energies.

Branches

Classical mechanics was traditionally divided into three main branches:

  • Statics, the study of equilibrium and its relation to forces
  • Dynamics, the study of motion and its relation to forces
  • Kinematics, dealing with the implications of observed motions without regard for circumstances causing them
Another division is based on the choice of mathematical formalism:
  • Newtonian mechanics
  • Lagrangian mechanics
  • Hamiltonian mechanics
Alternatively, a division can be made by region of application:
  • Celestial mechanics, relating to stars, planets and other celestial bodies
  • Continuum mechanics, for materials which are modelled as a continuum, e.g., solids and fluids (i.e., liquids and gases).
  • Relativistic mechanics (i.e. including the special and general theories of relativity), for bodies whose speed is close to the speed of light.
  • Statistical mechanics, which provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic or bulk thermodynamic properties of materials.

Quantum mechanics

The following are categorized as being part of Quantum mechanics:
  • Particle physics, the motion, structure, and reactions of particles
  • Nuclear physics, the motion, structure, and reactions of nuclei
  • Condensed matter physics, quantum gases, solids, liquids, etc.
  • Quantum statistical mechanics, large assemblies of particles

Professional organizations

  • Applied Mechanics Division, American Society of Mechanical Engineers
  • Fluid Dynamics Division, American Physical Society
  • Institution of Mechanical Engineers is the United Kingdom's qualifying body for Mechanical Engineers and has been the home of Mechanical Engineers for over 150 years.
  • International Union of Theoretical and Applied Mechanics

 

Analytical mechanics

Analytical mechanics is a term used for a refined, highly mathematical form of classical mechanics, constructed from the 18th century onwards as a formulation of the subject as founded by Isaac Newton. Often the term vectorial mechanics is applied to the form based on Newton's work, to contrast it with analytical mechanics. This distinction makes sense because analytical mechanics uses two scalar properties of motion, the kinetic and potential energies, instead of vector forces, to analyze the motion.
The subject has two parts: Lagrangian mechanics and Hamiltonian mechanics. The Lagrangian formulation identifies the actual path followed by the motion as a selection of the path over which the time integral of kinetic energy is least, assuming the total energy to be fixed, and imposing no conditions on the time of transit. The Hamiltonian formulation is more general, allowing time-varying energy, identifying the path followed to be the one with least action (the integral over the path of the difference between kinetic and potential energies), holding the departure and arrival times fixed. These approaches underlie the path integral formulation of quantum mechanics.
It began with d'Alembert's principle. By analogy with Fermat's principle, which is the variational principle in geometric optics, Maupertuis' principle was discovered in classical mechanics.
Using generalized coordinates, we obtain Lagrange's equations. Using the Legendre transformation, we obtain generalized momentum and the Hamiltonian.
Hamilton's canonical equations provide a set of first-order differential equations, while Lagrange's equations provide second-order differential equations. Finally, we may derive the Hamilton–Jacobi equation.
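For reference, with generalized coordinates q_i, the Lagrangian L, the generalized momenta p_i, and the Hamiltonian H, these take the standard forms
\frac{\mathrm{d}}{\mathrm{d}t} \frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0 \, , \qquad p_i = \frac{\partial L}{\partial \dot{q}_i} \, , \qquad H = \sum_i p_i \dot{q}_i - L \, , \qquad \dot{q}_i = \frac{\partial H}{\partial p_i} \, , \quad \dot{p}_i = -\frac{\partial H}{\partial q_i} \, .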
The study of the solutions of the Hamilton–Jacobi equations leads naturally to the study of symplectic manifolds and symplectic topology. In this formulation, the solutions of the Hamilton–Jacobi equations are the integral curves of Hamiltonian vector fields.