Please be careful and read the information at the link listed below.
Before you get swept up in thoughts of sourcing products from China to resell at a profit, check out the online resources on this topic. A Google search for “importing chinese products” returns roughly 4.3 million hits, and you can change the search terms to learn more about online scams and fraud involving Chinese goods. It can be a nebulous and daunting task to be sure, so it is always best to research first and be very cautious. In the hype and excitement of the online frenzy, we often skip important steps.
Hopefully you find it interesting and helpful. Please comment as well, and perhaps we can start a thread on this topic to educate as many people as possible.
This is just one of the millions of links that popped up, as a starting point on the subject. When I get a chance, perhaps I will add more on this subject and provide links to more online resources.
Important information for Importers
The world’s first electric car was built by a Victorian inventor 130 years ago, in 1884 …
Thomas Parker: the world’s first electric car, built by a Victorian inventor in 1884. Parker is in the light suit at the front of the car.
Rechargeable batteries that provided a viable means for storing electricity on board a vehicle did not come into being until 1859, with the invention of the lead-acid battery by French physicist Gaston Planté.
Thomas Parker, responsible for innovations such as electrifying the London Underground and overhead tramways in Liverpool and Birmingham, built the first practical production electric car in London in 1884, using his own specially designed high-capacity rechargeable batteries. Parker’s long-held interest in building more fuel-efficient vehicles led him to experiment with electric vehicles. He may also have been concerned about the malign effects smoke and pollution were having in London.
An alternative contender as the world’s first electric car was the German Flocken Elektrowagen, built in 1888.
Electric cars were reasonably popular in the late 19th century and early 20th century, when electricity was among the preferred methods for automobile propulsion, providing a level of comfort and ease of operation that could not be achieved by the gasoline cars of the time. Advances in internal combustion technology, especially the electric starter, soon lessened the relative advantages of the electric car. The greater range of gasoline cars, and their much quicker refueling times, encouraged a rapid expansion of petroleum infrastructure, which quickly proved decisive. The mass production of gasoline-powered vehicles, by companies such as the Ford Motor Company, reduced prices of gasoline-engined cars to less than half that of equivalent electric cars, and that inevitably led to a decline in the use of electric propulsion, effectively removing it from the automobile market by the early 1930s.
So what has changed? What makes the Tesla so different from the electric vehicles of the 1930s, and what has changed by 2014 to overcome all the reasons that made them disappear? Do you think Tesla will pull this off, 130 years after the invention first appeared?
Long Live Vacuum Tube Amps
Mar 17, 2014 by Lou Frenzel in Communiqué
Vacuum tube amplifiers just won’t go away. I am speaking of audio vacuum tube amps rather than microwave amps like magnetrons, klystrons, TWTs and the like. Most other audio gear is solid state, so why are there still vacuum tube amps? My grandson asked me that recently, and it was hard to explain the phenomenon. What I basically said is that vacuum tube amps sound better than solid-state amps, at least to some people. I had no way to follow up or demonstrate the effect.
I have actually compared solid-state audio power amps to their vacuum tube equivalents several times, using the same speakers. (It seems to me that the speakers would have more of an effect on the sound than the type of amplifier.) I could discern a difference between the two, but I do not have the words to describe it. It is akin to comparing wines at a tasting: there are words for that, but they are vague and subjective to be sure. So it is with audio. I have actually heard people say they can tell the difference between two different sets of speaker cables and connectors. I still don’t believe it.
So are vacuum tube amps better? I’m not sure. They do still sound very good, and for me it also depends on the music being played. Guitar players almost universally favor vacuum tube amplifiers, and there are certainly enough vacuum tube audio power amp manufacturers to support the niche. I ran across one called Frenzel Tube Amps in Texas (no relation to me). These guys build custom amps for audio systems and musicians, and there are a dozen or so other tube amp companies. Amazing.
Not only that, I recently discovered a new book, Building Valve Amplifiers, 2nd edition, by Morgan Jones. It is a highly detailed book on the actual design and construction of tube amplifiers. Published by Newnes/Elsevier, it covers planning, metalworking, wiring, and testing. A real nitty-gritty book for hobbyists and serious manufacturers. For example, it details how to orient audio and power transformers so that magnetic flux leakage does not affect other transformers or the tubes themselves. The test section is excellent. You may even learn where to find a loctal socket for a 7N7.
Incidentally, the book is a companion to Valve Amplifiers, 4th edition, also by Morgan Jones and published by Newnes/Elsevier. That is a serious design book with details on audio circuit design, equations, and related topics. A 4th edition means the book has been around for a while, is being kept up to date, and has a real market.
Anyway, I no longer have any tubes or tube equipment around. Very early in my career I worked as an engineer in industrial electronics, and I recall that I could make almost anything I needed with a 12AU7, a 12AX7, and/or a relay. The early germanium PNP 2N1305s did not cut it. Those days are gone for good. Even my ham gear is solid state, although one can still buy RF power amps with various kinds of vacuum tubes; they are hard to beat for RF power in the HF range. LDMOS amps are available too, but they are more expensive. And I suspect we will see some GaN ham power amps at modest prices in the near future. But I am not betting on the demise of the vacuum tube.
Field-programmable gate array
A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing—hence “field-programmable”. The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). (Circuit diagrams were previously used to specify the configuration, as they were for ASICs, but this is increasingly rare.)
Contemporary FPGAs have large resources of logic gates and RAM blocks to implement complex digital computations. Because FPGA designs employ very fast I/Os and bidirectional data buses, it becomes a challenge to verify correct timing of valid data within setup and hold times. Floor planning enables resource allocation within the FPGA to meet these timing constraints. FPGAs can be used to implement any logical function that an ASIC could perform. The ability to update the functionality after shipping, partial re-configuration of a portion of the design, and the low non-recurring engineering costs relative to an ASIC design (notwithstanding the generally higher unit cost) offer advantages for many applications.
FPGAs contain programmable logic components called “logic blocks”, and a hierarchy of reconfigurable interconnects that allow the blocks to be “wired together”—somewhat like many (changeable) logic gates that can be inter-wired in (many) different configurations. Logic blocks can be configured to perform complex combinational functions, or merely simple logic gates like AND and XOR. In most FPGAs, the logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory.
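To make the LUT idea concrete, here is a minimal behavioral sketch in Python, purely as an illustration (real LUTs are configured by the device bitstream, not by function calls): a k-input LUT is simply a 2^k-entry truth table indexed by the packed input bits.

```python
# Minimal model of a k-input lookup table (LUT): the configuration
# fills a 2**k truth table, and evaluation is just table indexing.

def make_lut(truth_table):
    """Return a function computing the LUT output for given input bits."""
    def lut(*inputs):
        index = 0
        for bit in inputs:          # pack input bits into a table index
            index = (index << 1) | bit
        return truth_table[index]
    return lut

# Configure a 2-input LUT as XOR: outputs for inputs 00, 01, 10, 11.
xor_lut = make_lut([0, 1, 1, 0])

print(xor_lut(0, 1))  # 1
print(xor_lut(1, 1))  # 0
```

Any 2-input function (AND, OR, NAND, …) is just a different 4-entry table, which is exactly why a LUT-based logic block can be configured to implement arbitrary combinational functions.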
Some FPGAs have analog features in addition to digital functions. The most common analog feature is programmable slew rate and drive strength on each output pin, allowing the engineer to set slow rates on lightly loaded pins that would otherwise ring unacceptably, and to set stronger, faster rates on heavily loaded pins on high-speed channels that would otherwise run too slowly. Another relatively common analog feature is differential comparators on input pins designed to be connected to differential signaling channels. A few “mixed-signal FPGAs” have integrated peripheral analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) with analog signal conditioning blocks, allowing them to operate as a system-on-a-chip. Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, and a field-programmable analog array (FPAA), which carries analog values on its internal programmable interconnect fabric.
The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in batches in a factory or in the field (hence “field-programmable”). However, the programmable logic was hard-wired between logic gates.
In the late 1980s the Naval Surface Warfare Department funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful and a patent related to the system was issued in 1992.
Some of the industry’s foundational concepts and technologies for programmable logic arrays, gates, and logic blocks are founded in patents awarded to David W. Page and LuVerne R. Peterson in 1985.
Xilinx co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially viable field-programmable gate array in 1985: the XC2064. The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market. It boasted a mere 64 configurable logic blocks (CLBs), each with two 3-input lookup tables (LUTs). More than 20 years later, Freeman was inducted into the National Inventors Hall of Fame for his invention.
Xilinx grew quickly and unchallenged from 1985 to the mid-1990s, when competitors sprouted up and eroded significant market share. By 1993, Actel was serving about 18 percent of the market.
The 1990s were an explosive period of time for FPGAs, both in sophistication and the volume of production. In the early 1990s, FPGAs were primarily used in telecommunications and networking. By the end of the decade, FPGAs found their way into consumer, automotive, and industrial applications.
A recent trend has been to take the coarse-grained architectural approach a step further by combining the logic blocks and interconnects of traditional FPGAs with embedded microprocessors and related peripherals to form a complete “system on a programmable chip”. This work mirrors the architecture by Ron Perlof and Hana Potash of the Burroughs Advanced Systems Group, which combined a reconfigurable CPU architecture on a single chip called the SB24. That work was done in 1982. Examples of such hybrid technologies can be found in the Xilinx Zynq™-7000 All Programmable SoC, which includes a 1.0 GHz dual-core ARM Cortex-A9 MPCore processor embedded within the FPGA’s logic fabric, or in the Altera Arria V FPGA, which includes an 800 MHz dual-core ARM Cortex-A9 MPCore. The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel’s programmable logic architecture. The Actel SmartFusion devices incorporate an ARM Cortex-M3 hard processor core (with up to 512 kB of flash and 64 kB of RAM) and analog peripherals such as a multi-channel ADC and DACs into their flash-based FPGA fabric.
In 2010, Xilinx introduced the first All Programmable System on a Chip, branded Zynq™-7000, which fused features of a high-end ARM microcontroller (hard-core implementations of a 32-bit processor, memory, and I/O) with an FPGA fabric to make FPGAs easier for embedded designers to use. By incorporating the ARM processor-based platform into a 28 nm FPGA family, the extensible processing platform enables system architects and embedded software developers to apply a combination of serial and parallel processing to their embedded system designs, for which the general trend has been progressively increasing complexity. The high level of integration helps to reduce power consumption and dissipation, and the reduced parts count versus using an FPGA with a separate CPU chip leads to lower parts cost, a smaller system, and higher reliability, since most failures in modern electronics occur on PCBs in the connections between chips rather than within the chips themselves.
An alternate approach to using hard-macro processors is to make use of soft processor cores that are implemented within the FPGA logic. Nios II, MicroBlaze and Mico32 are examples of popular softcore processors.
As previously mentioned, many modern FPGAs have the ability to be reprogrammed at “run time,” and this is leading to the idea of reconfigurable computing or reconfigurable systems — CPUs that reconfigure themselves to suit the task at hand.
Additionally, new, non-FPGA architectures are beginning to emerge. Software-configurable microprocessors such as the Stretch S5000 adopt a hybrid approach by providing an array of processor cores and FPGA-like programmable cores on the same chip.
Gates
- 1982: 8,192 gates, Burroughs Advanced Systems Group, integrated into the S-Type 24-bit processor for reprogrammable I/O
- 1987: 9,000 gates, Xilinx
- 1992: 600,000 gates, Naval Surface Warfare Department
- Early 2000s: millions of gates
Market size
- 1985: first commercial FPGA: the Xilinx XC2064
- 1987: $14 million
- ~1993: >$385 million
- 2005: $1.9 billion
- 2010 estimates: $2.75 billion
FPGA design starts
Historically, FPGAs have been slower, less energy-efficient, and generally less capable than their fixed ASIC counterparts. An older study showed that designs implemented on FPGAs need on average 40 times as much area, draw 12 times as much dynamic power, and run at one-third the speed of corresponding ASIC implementations. More recently, FPGAs such as the Xilinx Virtex-7 or the Altera Stratix 5 have come to rival corresponding ASIC and ASSP solutions by providing significantly reduced power, increased speed, lower materials cost, minimal implementation real estate, and increased possibilities for re-configuration ‘on the fly’. Where previously a design may have included 6 to 10 ASICs, the same design can now be achieved using only one FPGA.
Advantages include the ability to re-program in the field to fix bugs, shorter time to market, and lower non-recurring engineering costs. Vendors can also take a middle road: develop the hardware on ordinary FPGAs, but manufacture the final version as an ASIC so that it can no longer be modified after the design has been committed.
Xilinx claims that several market and technology dynamics are changing the ASIC/FPGA paradigm:
- Integrated circuit costs are rising aggressively
- ASIC complexity has lengthened development time
- R&D resources and headcount are decreasing
- Revenue losses for slow time-to-market are increasing
- Financial constraints in a poor economy are driving low-cost technologies
These trends make FPGAs a better alternative to ASICs for a larger number of higher-volume applications than they have historically been used for, and the company attributes the growing number of FPGA design starts (see History) to this shift.
Some FPGAs have the capability of partial re-configuration that lets one portion of the device be re-programmed while other portions continue running.
Complex programmable logic devices (CPLD)
The primary differences between CPLDs (complex programmable logic devices) and FPGAs are architectural. A CPLD has a somewhat restrictive structure consisting of one or more programmable sum-of-products logic arrays feeding a relatively small number of clocked registers. The result of this is less flexibility, with the advantage of more predictable timing delays and a higher logic-to-interconnect ratio. The FPGA architectures, on the other hand, are dominated by interconnect. This makes them far more flexible (in terms of the range of designs that are practical for implementation within them) but also far more complex to design for.
In practice, the distinction between FPGAs and CPLDs is often one of size, as FPGAs are usually much larger in terms of resources than CPLDs. Typically only FPGAs contain more complex embedded functions such as adders, multipliers, memory, and serdes. Another common distinction is that CPLDs contain embedded flash to store their configuration, while FPGAs usually, but not always, require an external nonvolatile memory.
With respect to security, FPGAs have both advantages and disadvantages compared to ASICs or secure microprocessors. FPGAs’ flexibility makes malicious modification during fabrication a lower risk. For many FPGAs, however, the design bitstream is exposed while the FPGA loads it from external memory (typically on every power-on). All major FPGA vendors now offer designers a spectrum of security solutions such as bitstream encryption and authentication. For example, Altera and Xilinx offer AES encryption (up to 256-bit) for bitstreams stored in external flash memory.
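As a conceptual sketch of bitstream authentication (this is not any vendor's actual scheme; the key, framing, and function names here are invented for illustration), a keyed MAC over the bitstream lets a loader reject images that were tampered with in external memory:

```python
# Conceptual sketch of bitstream authentication (not a vendor scheme):
# the device holds a secret key and verifies an HMAC-SHA-256 tag
# before accepting a configuration bitstream.
import hashlib
import hmac

KEY = b"device-secret-key"  # hypothetical pre-provisioned device key

def sign_bitstream(bitstream: bytes) -> bytes:
    """Compute the authentication tag for a bitstream."""
    return hmac.new(KEY, bitstream, hashlib.sha256).digest()

def load_bitstream(bitstream: bytes, tag: bytes) -> bool:
    """Accept the bitstream only if the tag verifies (constant-time)."""
    return hmac.compare_digest(sign_bitstream(bitstream), tag)

bs = b"\x00\x01configuration-frames"
tag = sign_bitstream(bs)
print(load_bitstream(bs, tag))         # True
print(load_bitstream(bs + b"x", tag))  # False: tampered image rejected
```

Real devices combine authentication with encryption so the bitstream is also unreadable in transit, but the accept/reject decision works on the same principle.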
FPGAs that store their configuration internally in nonvolatile flash memory, such as Microsemi‘s ProAsic 3 or Lattice‘s XP2 programmable devices, do not expose the bitstream and do not need encryption. In addition, storing the LUT configuration in flash provides protection against single-event upsets (SEUs), which matters for space applications.
Applications of FPGAs include digital signal processing, software-defined radio, ASIC prototyping, medical imaging, computer vision, speech recognition, cryptography, bioinformatics, computer hardware emulation, radio astronomy, metal detection and a growing range of other areas.
FPGAs originally began as competitors to CPLDs, competing in a similar space: glue logic for PCBs. As their size, capabilities, and speed increased, they began to take over larger and larger functions, to the point where some are now marketed as full systems on chips (SoCs). Particularly with the introduction of dedicated multipliers into FPGA architectures in the late 1990s, applications that had traditionally been the sole reserve of DSPs began to incorporate FPGAs instead.
Traditionally, FPGAs have been reserved for specific vertical applications where the volume of production is small. For these low-volume applications, the premium that companies pay in hardware costs per unit for a programmable chip is more affordable than the development resources spent on creating an ASIC for a low-volume application. Today, new cost and performance dynamics have broadened the range of viable applications.
Common FPGA Applications
- Aerospace and Defense
- Missiles & Munitions
- Secure Solutions
- ASIC Prototyping
- Connectivity Solutions
- Portable Electronics
- Digital Signal Processing (DSP)
- High Resolution Video
- Image Processing
- Vehicle Networking and Connectivity
- Automotive Infotainment
- Real-Time Video Engine
- Switches and Routers
- Consumer Electronics
- Digital Displays
- Digital Cameras
- Multi-function Printers
- Portable Electronics
- Set-top Boxes
- Distributed Monetary Systems
- Transaction verification
- BitCoin Mining
- Data Center
- Load Balancing
- High Performance Computing
- Super Computers
- SIGINT Systems
- High-end RADARS
- High-end Beam Forming Systems
- Data Mining Systems
- Industrial Imaging
- Industrial Networking
- Motor Control
- CT Scanner
- Surgical Systems
- Industrial Imaging
- Secure Solutions
- Image Processing
- Video & Image Processing
- High Resolution Video
- Video Over IP Gateway
- Digital Displays
- Industrial Imaging
- Wired Communications
- Optical Transport Networks
- Network Processing
- Connectivity Interfaces
- Wireless Communications
- Connectivity Interfaces
- Mobile Backhaul
The most common FPGA architecture consists of an array of logic blocks (called Configurable Logic Block, CLB, or Logic Array Block, LAB, depending on vendor), I/O pads, and routing channels. Generally, all the routing channels have the same width (number of wires). Multiple I/O pads may fit into the height of one row or the width of one column in the array.
An application circuit must be mapped into an FPGA with adequate resources. While the number of CLBs/LABs and I/Os required is easily determined from the design, the number of routing tracks needed may vary considerably even among designs with the same amount of logic. For example, a crossbar switch requires much more routing than a systolic array with the same gate count. Since unused routing tracks increase the cost (and decrease the performance) of the part without providing any benefit, FPGA manufacturers try to provide just enough tracks so that most designs that will fit in terms of Lookup tables (LUTs) and I/Os can be routed. This is determined by estimates such as those derived from Rent’s rule or by experiments with existing designs.
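For intuition, a routing-demand estimate in the spirit of Rent's rule, T = t * g^p (terminals T for a block of g gates), can be sketched as follows; the coefficient t and exponent p below are illustrative assumptions, not vendor data.

```python
# Rent's rule sketch: external terminal demand T ≈ t * g**p for a block
# of g gates. The coefficients are illustrative assumptions only.

def rent_terminals(gates: int, t: float = 4.0, p: float = 0.6) -> float:
    """Estimate external terminal (routing) demand for a block of gates."""
    return t * gates ** p

for g in (100, 1_000, 10_000):
    print(f"{g:>6} gates -> ~{rent_terminals(g):.0f} terminals")
```

The sub-linear exponent (p < 1) is the key point: terminal demand grows more slowly than gate count, which is what lets manufacturers provision "just enough" routing tracks for most designs of a given logic capacity.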
In general, a logic block (CLB or LAB) consists of a few logical cells (called ALMs, LEs, Slices, etc.). A typical cell consists of a 4-input LUT, a full adder (FA), and a D-type flip-flop. The 4-input LUT is built from two 3-input LUTs: in normal mode, a multiplexer combines them into a single 4-input LUT; in arithmetic mode, their outputs are fed to the FA. The mode is selected by programming another multiplexer, and the output can be either synchronous or asynchronous, depending on the programming of an output multiplexer. In practice, all or part of the FA is folded into the LUTs to save space.
ALMs and Slices usually contain two or four such structures, with some shared signals. CLBs/LABs typically contain a few ALMs/LEs/Slices.
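A behavioral sketch of such a cell may help; the structure below is simplified from the description above, and the mode names and truth-table encoding are illustrative assumptions, not any vendor's netlist model.

```python
# Behavioral sketch of a logic cell: two 3-input LUTs either combine
# into one 4-input LUT ("normal") or feed a full adder ("arithmetic").
# The flip-flop and output mux are omitted for brevity.

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def logic_cell(lut_hi, lut_lo, inputs, mode, carry_in=0):
    """lut_hi/lut_lo: 8-entry truth tables of the two 3-input LUTs;
    inputs: bit 4-tuple (a, b, c, d). Returns (output, carry_out)."""
    a, b, c, d = inputs
    idx = (b << 2) | (c << 1) | d      # the three shared LUT inputs
    hi, lo = lut_hi[idx], lut_lo[idx]
    if mode == "normal":               # mux on `a` builds a 4-input LUT
        return (hi if a else lo), carry_in
    # arithmetic mode: the two LUT outputs drive the full adder
    return full_adder(hi, lo, carry_in)

# 4-input AND in normal mode: hi-LUT is AND(b, c, d); lo-LUT is constant 0.
print(logic_cell([0]*7 + [1], [0]*8, (1, 1, 1, 1), "normal"))  # (1, 0)
```

Chaining the carry output of one cell into the carry input of the next is how the dedicated carry logic builds multi-bit adders without consuming extra LUTs.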
In recent years, manufacturers have started moving to 6-input LUTs in their high performance parts, claiming increased performance.
Since clock signals (and often other high-fan-out signals) are normally routed via special-purpose dedicated routing networks in commercial FPGAs, they are managed separately from other signals.
In a typical example architecture, each input is accessible from one side of the logic block, while the output pin can connect to routing wires in both the channel to the right and the channel below the logic block.
Each logic block output pin can connect to any of the wiring segments in the channels adjacent to it.
Similarly, an I/O pad can connect to any one of the wiring segments in the channel adjacent to it. For example, an I/O pad at the top of the chip can connect to any of the W wires (where W is the channel width) in the horizontal channel immediately below it.
Generally, the FPGA routing is unsegmented. That is, each wiring segment spans only one logic block before it terminates in a switch box. By turning on some of the programmable switches within a switch box, longer paths can be constructed. For higher speed interconnect, some FPGA architectures use longer routing lines that span multiple logic blocks.
Whenever a vertical and a horizontal channel intersect, there is a switch box. In this architecture, when a wire enters a switch box, three programmable switches allow it to connect to three other wires in adjacent channel segments. The pattern, or topology, of switches used in this architecture is the planar, or domain-based, switch box topology: a wire in track number one connects only to wires in track number one in adjacent channel segments, a wire in track number two connects only to other wires in track number two, and so on.
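The planar (domain-based) topology reduces to a simple connectivity rule, sketched below as an illustration: a wire on track i of one channel side can reach only track i on each of the other three sides of the switch box.

```python
# Sketch of a planar/domain-based switch box: a wire entering on track i
# of one side may connect only to track i on each of the other three sides.

SIDES = ("north", "east", "south", "west")

def switchbox_targets(side, track):
    """All (side, track) pairs reachable through the planar switch box."""
    return [(s, track) for s in SIDES if s != side]

print(switchbox_targets("north", 1))
# [('east', 1), ('south', 1), ('west', 1)]
```

Keeping tracks in separate "domains" like this needs only three switches per entering wire, which is why the topology is cheap in silicon, at the cost of some routing flexibility compared with topologies that let wires change tracks.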
Modern FPGA families expand upon the above capabilities to include higher level functionality fixed into the silicon. Having these common functions embedded into the silicon reduces the area required and gives those functions increased speed compared to building them from primitives. Examples of these include multipliers, generic DSP blocks, embedded processors, high speed I/O logic and embedded memories.
FPGAs are also widely used for systems validation including pre-silicon validation, post-silicon validation, and firmware development. This allows chip companies to validate their design before the chip is produced in the factory, reducing the time-to-market.
To shrink the size and power consumption of FPGAs, vendors such as Tabula and Xilinx have introduced new 3D or stacked architectures. Following the introduction of its 28 nm 7-series FPGAs, Xilinx revealed that several of the highest-density parts in those FPGA product lines will be constructed using multiple dies in one package, employing technology developed for 3D construction and stacked-die assemblies. The technology stacks several (three or four) active FPGA dice side-by-side on a silicon interposer – a single piece of silicon that carries passive interconnect.
FPGA design and programming
To define the behavior of the FPGA, the user provides a hardware description language (HDL) or a schematic design. The HDL form is more suited to work with large structures because it’s possible to just specify them numerically rather than having to draw every piece by hand. However, schematic entry can allow for easier visualisation of a design.
Then, using an electronic design automation tool, a technology-mapped netlist is generated. The netlist can then be fitted to the actual FPGA architecture using a process called place-and-route, usually performed by the FPGA vendor’s proprietary place-and-route software. The user validates the map, place, and route results via timing analysis, simulation, and other verification methodologies. Once the design and validation process is complete, the binary file generated (also using the FPGA vendor’s proprietary software) is used to (re)configure the FPGA. The file is transferred to the FPGA/CPLD via a serial interface (JTAG) or to an external memory device such as an EEPROM.
The most common HDLs are VHDL and Verilog. Designing in HDLs has been compared to programming in assembly language, so in an attempt to reduce this complexity there are moves to raise the abstraction level through the introduction of alternative languages. National Instruments’ LabVIEW graphical programming language (sometimes referred to as “G”) has an FPGA add-in module available to target and program FPGA hardware.
To simplify the design of complex systems in FPGAs, there exist libraries of predefined complex functions and circuits that have been tested and optimized to speed up the design process. These predefined circuits are commonly called IP cores, and are available from FPGA vendors and third-party IP suppliers (rarely free, and typically released under proprietary licenses). Other predefined circuits are available from developer communities such as OpenCores (typically released under free and open source licenses such as the GPL, BSD or similar license), and other sources.
In a typical design flow, an FPGA application developer will simulate the design at multiple stages throughout the design process. Initially the RTL description in VHDL or Verilog is simulated by creating test benches to simulate the system and observe results. Then, after the synthesis engine has mapped the design to a netlist, the netlist is translated to a gate level description where simulation is repeated to confirm the synthesis proceeded without errors. Finally the design is laid out in the FPGA at which point propagation delays can be added and the simulation run again with these values back-annotated onto the netlist.
Basic process technology types
- SRAM – based on static memory technology. In-system programmable and re-programmable. Requires external boot devices. CMOS. Currently in use.
- Antifuse – One-time programmable. CMOS.
- PROM – Programmable Read-Only Memory technology. One-time programmable because of plastic packaging. Obsolete.
- EPROM – Erasable Programmable Read-Only Memory technology. One-time programmable but with window, can be erased with ultraviolet (UV) light. CMOS. Obsolete.
- EEPROM – Electrically Erasable Programmable Read-Only Memory technology. Can be erased, even in plastic packages. Some but not all EEPROM devices can be in-system programmed. CMOS.
- Flash – Flash-erase EPROM technology. Can be erased, even in plastic packages. Some but not all flash devices can be in-system programmed. Usually, a flash cell is smaller than an equivalent EEPROM cell and is therefore less expensive to manufacture. CMOS.
- Fuse – One-time programmable. Bipolar. Obsolete.
Xilinx and Altera are the long-standing market leaders. Other competitors include Lattice Semiconductor (SRAM-based with integrated configuration flash, instant-on, low power, live reconfiguration), Actel (now Microsemi; antifuse and flash-based, mixed-signal), SiliconBlue Technologies (extremely low-power SRAM-based FPGAs with optional integrated nonvolatile configuration memory; acquired by Lattice in 2011), Achronix (SRAM-based, 1.5 GHz fabric speed), and QuickLogic (handheld-focused CSSPs, no general-purpose FPGAs).
- Application-specific instruction-set processor (ASIP)
- Application-specific integrated circuit (ASIC)
- Field programmable object array (FPOA)
- Combinational logic
- Complex programmable logic device (CPLD)
- Computing with Memory – a time-multiplexed reconfigurable architecture using a 2-D memory array
- Digital clock manager (DCM)
- Erasable programmable logic device (EPLD)
- FPGA prototype
- Gate array
- Handel-C – an extended C-based description language designed for FPGAs
- Hybrid-core computing
- Impulse CoDeveloper (Impulse C)
- JHDL: Just-Another Hardware Description Language
- Multi-gigabit transceiver (serdes) – serial transceivers, now very common in FPGA fabrics
- MyHDL – a Python-based HDL that generates Verilog or VHDL; some prefer MiGen
- Programmable Array Logic (PAL), an early PLD
- Partial re-configuration
- Programmable Logic Array
- Reconfigurable computing
- Soft processor
- Software Defined Silicon (SDS)
- SystemC – a C++-based system description language
- Verilog: Hardware Description Language
- VHDL: VHSIC (Very High Speed Integrated Circuit) Hardware Description Language
How One Man Hacked His Way Into the Slot-Machine Industry
- By Brendan I. Koerner
- July 15, 2011
- Categories: Wired August 2011
Photo: Todd B. Lussier
Rodolfo Rodriguez Cabrera didn’t set out to mastermind a global counterfeiting ring. All he wanted was to earn a decent living doing what he loves most: tinkering with electronics. That’s why he started his own slot-machine repair company in Riga, Latvia. Just to make a little cash while playing with circuit boards.
Born and raised in Camagüey, Cuba, Cabrera always had an affinity for technical pursuits. Once, after winning a student essay contest in 1976, he was given a personal audience with Fidel Castro. When the dictator asked the 10-year-old what he wanted to be when he grew up, Cabrera confidently replied, “An architectural engineer.”
Nine years later, after becoming obsessed with airplanes as a teenager, Cabrera won a scholarship to Riga Civil Aviation Engineers Institute, home to one of the Soviet Union’s finest aeronautical-engineering programs. While working toward his degree, he fell in love with an older Latvian woman, and though he was expected to return to Cuba after graduation to serve Castro’s regime, Cabrera decided to stay in Riga and build a new life designing and working on aircraft.
But soon after Cabrera completed his degree, Latvia broke free from the dying Soviet Union. The newly independent country had no aerospace industry of its own, and thus no aerospace jobs. Instead of fixing jet engines, Cabrera was forced to make money repairing radios and telephones. In 1994 he accepted a gig with a company called Altea, servicing the boxy videogame consoles found atop Eastern European bars, where they offer drunks the chance to waste a few coins answering trivia questions or playing Tetris.
As Latvia became more open and prosperous, slot machines began to pop up in the nation’s bars, clubs, and supermarkets, creating new repair opportunities for Altea. Though he wasn’t much of a gambler, Cabrera was drawn to these devices. He spent hours dissecting slot electronics to learn everything he could about how they worked. The deeper he plunged, the more he came to regard slot machines as his true professional calling. So in 2004, Cabrera used his modest savings to found his own repair company, FE Electronic.
Cabrera was particularly fond of the slots made by Nevada-based International Game Technology, which he considered by far the industry’s most advanced. Like all slots, IGT’s machines are powered by proprietary circuit boards equipped with rows of memory cards; those cards, in turn, contain each game’s unique software. To prevent piracy, the boards are designed to reject memory cards unless they’re accompanied by a security chip programmed with an uncrackable authorization code.
Like any good hacker, Cabrera decided to express his admiration for IGT’s technology by trying to beat it. Using blueprints meant to assist casino service personnel, he figured out a way to solder a half-dozen jumper wires between the memory cards and the motherboards, completing circuits that circumvented the machine’s security. This gave him the ability to load any IGT game he wanted onto the boards. If he was given a used Pharaoh’s Gold machine, for example, he could convert it to a Cleopatra II by swapping in freshly programmed memory cards.
However innocent his initial intentions, Cabrera quickly saw the business potential in this breakthrough. He knew that converting machines without IGT’s OK wasn’t legal. But this was Latvia, he figured, where capitalism is wild and woolly. Surely no one would notice if he made a few bucks on the side by hacking IGT’s tech.
There was a time when casinos only grudgingly tolerated slot machines. In the early years of Las Vegas, slots were relegated to the perimeter of casino floors, where they were expected to gobble up coins from women waiting on their blackjack-playing husbands. The machines’ mechanical gears required constant maintenance, and the games were magnets for cheats. Scammers became adept at techniques like affixing coins to fishing lines or covertly prying open service doors to monkey with the reels.
But a salesman named William “Si” Redd had the foresight to realize that digital technology would eventually transform slots into a revenue powerhouse. In the early 1970s, Redd was the independent Nevada distributor for the Bally Manufacturing Corporation of Chicago, which made the popular Money Honey slot machine. Flush with cash from sales of that game and others like Big Bertha, Redd started acquiring tiny startups that were pioneering videogames, which at the time were considered little more than engineering novelties. One of his acquisitions, Raven Electronics of Reno, was developing a video blackjack machine; another, Nutting Associates of Mountain View, California, had created Computer Space, a primitive forerunner of Asteroids.
Redd planned on using these startups’ know-how to help create video slot machines, which would replace fickle gears with reliable circuit boards. Such machines would require less maintenance and be less susceptible to cheating than their analog predecessors. In the midst of Redd’s buying spree, Bally offered to purchase his distributorship. Redd agreed with one condition: that he be allowed to retain the video-related patents he had acquired. Bally myopically took the deal, and Redd went off to found the A-1 Supply Company—later renamed International Game Technology.
Just as Redd had foreseen, IGT’s video machines were a boon to casinos. In 1971, slots generated 36 percent of Nevada’s gaming revenue; by 1981, with digital slots on the rise, that figure was up to 44 percent. But slots didn’t truly become America’s favorite casino pastime until a Norwegian mathematician named Inge Telnaes came up with the most brilliant gambling innovation since the point spread.
The problem with slot machines, as Telnaes saw it, was that their jackpots were limited by the number of reels they could use. Since players expected each reel to have no more than 10 to 15 symbols, a machine needed many reels to make the odds long enough to justify a huge payout when all the cherries or bells settled into a row. But the more reels a machine had, the more players were reminded of the fact that their quest for riches would likely end in futility; no one wanted to try their luck on a machine with dozens of reels (or, alternatively, hundreds and hundreds of symbols on enormous reels).
Telnaes’ solution to this conundrum was US Patent Number 4,448,419, awarded in 1984. His invention called for slot machine results to be determined not by the spinning of reels but by a random-number generator. The reels on such a machine would display only a visual representation of the generator’s results, lining up when a winning number spit forth or (far more frequently) settling into a losing mishmash of symbols. The patent made possible the development of slot machines that could offer extremely long odds—and thus enticingly massive jackpots—while still appearing to have just a few tumblers. IGT wisely purchased Telnaes’ patent in 1989, thereby guaranteeing itself a steady stream of royalties as its competitors adopted random-number generators, too.
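The virtual-reel idea can be sketched in a few lines of Python. The 64-stop reel and the symbol counts below are invented for illustration, not taken from any real machine; the point is that the jackpot symbol occupies only one virtual stop even though the player sees just four distinct symbols.

```python
import random

# Toy illustration of the Telnaes patent: the random-number generator
# picks among many hidden "virtual" stops, which map onto a short list
# of visible symbols. The odds are set by the virtual reel, not by what
# the player sees. (Reel contents here are invented for illustration.)
VIRTUAL_REEL = ["JACKPOT"] + ["BAR"] * 7 + ["CHERRY"] * 16 + ["BLANK"] * 40

def spin(rng=random):
    """Pick a random virtual stop and return the symbol it displays."""
    return VIRTUAL_REEL[rng.randrange(len(VIRTUAL_REEL))]

def jackpot_odds(reels=3):
    """Probability that every reel lands on the single JACKPOT stop."""
    p = VIRTUAL_REEL.count("JACKPOT") / len(VIRTUAL_REEL)
    return p ** reels
```

With three such reels the jackpot odds are 1 in 64³ (about 1 in 262,000) even though each reel appears to carry only a handful of symbols; lengthening the virtual reel stretches the odds further without changing what the player sees.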
By 1990, slot machines accounted for a full two-thirds of Las Vegas’ gaming revenue, a percentage that has remained fairly constant ever since. Slots took over the prime casino real estate previously reserved for blackjack and roulette; three-quarters of gaming-floor acreage in Las Vegas is now inhabited by slots. And IGT grew into the industry’s Goliath, with annual revenue of close to $2 billion and a coveted spot on the S&P 500 index. Roughly half of America’s 833,000 slot machines are produced at IGT’s manufacturing plant in Reno.
Armed with detailed intelligence regarding gamblers’ behavior, IGT’s designers now tailor each new machine to appeal to a specific type of player. “One of the things that really defines how a game plays is volatility of the math model,” says Chris Satchell, the company’s CTO, who previously filled the same role at Microsoft’s videogame division. Some games, he explains, are based on algorithms that produce frequent but small payouts, ensuring that risk-averse players are able to play for long stretches before losing their bankrolls. High-volatility games, by contrast, offer large jackpots but long odds of winning and are thus designed to attract gamblers who want a quick shot at a big score. Creating those varied experiences, while still ensuring that the house always wins a predictable amount over the long run, requires the expertise of professional mathematicians. IGT scours the nation’s graduate mathematics programs in search of talent who would rather develop slots software than devise Wall Street trading algorithms.
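The volatility trade-off Satchell describes can be illustrated with two toy payout tables. Both tables below are invented, not from any IGT game; they return the same 92 cents per dollar on average, but one concentrates its return in a rare 500x jackpot.

```python
# Two hypothetical payout tables as (prize multiplier, probability) pairs.
# Both have the same expected return (0.92 per unit wagered), so the house
# edge is identical; only the spread of outcomes differs.
LOW_VOL = [(2, 0.30), (4, 0.08), (0, 0.62)]
HIGH_VOL = [(500, 0.001), (10, 0.042), (0, 0.957)]

def expected_return(table):
    """Average payout per unit wagered; the house edge is 1 minus this."""
    return sum(prize * p for prize, p in table)

def volatility(table):
    """Standard deviation of the per-spin payout."""
    m = expected_return(table)
    return sum(p * (prize - m) ** 2 for prize, p in table) ** 0.5
```

The low-volatility table pays small prizes often, stretching a bankroll; the high-volatility table loses almost every spin but dangles a big score. The math model's job is to pick the spread while pinning the long-run average.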
Slots manufacturers have recently come to view game consoles as a serious threat to their business; they fear that younger gamblers in particular might prefer to stay home and play L.A. Noire than trek to a casino. So to give players the illusion that they’re doing something more interactive than clicking on a random-number generator, many slots now offer periodic bonuses like free spins or minigames. These can be customized to an individual player’s preferences, based on information stored on their casino loyalty cards, which are inserted into the machine during play. The systems that determine how and when these bonuses kick in have become the subject of fierce patent wars between IGT and its competitors, particularly Bally; the two companies have been locked in litigation for much of the past decade.
Among digital devices, slots are unique in the amount of regulation they must endure. Government overseers rely on a handful of testing facilities—the largest run by the Nevada Gaming Control Board and by Gaming Laboratories International of Lakewood, New Jersey—to verify that new machines perform exactly as their manufacturers promise. For starters, the devices must pay out as stipulated on their spec sheets; if a slot is designed to return 92.3567 cents of every dollar played over its lifetime, it better deliver precisely that amount over thousands upon thousands of laboratory spins. The machine must also prove capable of standing up to the ravages of power outages, 20,000-volt shocks, and numerous spilled daiquiris. “You need to be as secure as banking applications and as robust as military applications,” Satchell says. “Because if there’s a customer issue, you have to be able to trace what happened.” If a casino’s losses are found to have been caused by faulty software, the machine’s manufacturer could be on the hook for reimbursement.
Since slot software is so difficult and costly to perfect, companies such as IGT jealously guard their programs as trade secrets of the highest order. “The industry considers intellectual property the most significant asset they have,” says David Schwartz, director of the Center for Gaming Research at the University of Nevada, Las Vegas. A company like IGT simply won’t stand for anyone stealing its lifeblood.
With his hack of IGT’s circuit boards, Rodolfo Rodriguez Cabrera had stumbled into a terrific opportunity. He knew that the most-devoted slots players care a great deal about novelty, which is why IGT and its competitors roll out hundreds of new games every year. Casinos must periodically refresh their floors with updated machines or risk losing loyal customers to competitors who understand that IGT’s The Hangover is now a much more desirable game than IGT’s Dick Clark’s Bloopers. But new machines typically start around $10,000. Cabrera realized he could make a tidy profit by buying used slots, updating them with fresh games, then reselling them to budget-conscious casinos in Europe.
Russia was then gearing up to outlaw most casinos, which meant cheap used machines were flooding into the Baltics. The big challenge for Cabrera would be to develop an extensive library of IGT games; pirating code was not his forte. He solved that issue by hiring a local to write a software-cracking program called IGT Quad Clone, which allowed Cabrera to rip the software from any IGT memory card to a Windows-based computer, using a standard USB connection. The game program could then be flashed onto new cards with a plug-and-play programming device that Cabrera had purchased from a Russian merchant, no questions asked.
Photo: US Attorney’s Office, District of Nevada
Soon Cabrera was doing a brisk trade selling his refurbished machines to customers throughout Europe. As FE Electronic began to thrive, Cabrera came up with a clever way of fattening his profit margins even more: Instead of buying and revamping used machines, he would simply manufacture his own. All the necessary parts were readily available on the secondhand market: IGT’s stock cabinets and proprietary circuit boards, as well as generic components like LCD monitors and power supplies. When Cabrera added up all the expenses, including printing glass signage to make the games look authentic and even faking IGT serial number plates, the cost was still considerably less than buying a genuine used machine from Russia.
Demand for these new machines was so strong that Cabrera had to go on a hiring spree; FE Electronic’s staff ballooned to 20 employees, most of whom spent their days soldering jumper wires onto IGT’s proprietary circuit boards. Cabrera, meanwhile, continued to hone his mastery of the machines. He figured out a way to make the games work with just four or five memory cards each, instead of the 16 cards IGT normally uses. Cabrera took pride in the fact that he was improving the technology of a company he held in the highest regard.
In early 2006, shortly after returning from a gaming expo in London, Cabrera received a phone call from an American named Henry Mantilla. A former project manager at the Palms in Las Vegas, Mantilla had recently moved to Cape Coral, Florida, to join Aqua Gaming, a company that sells refurbished slot machines worldwide. He had heard through the industry grapevine that Cabrera had a special knack for fixing damaged IGT circuit boards. Might the company study how Cabrera performed his craft? Cabrera readily agreed.
Nearly a year later, in January 2007, Mantilla and his boss, Aqua Gaming president Charles Frost, paid a visit to FE Electronic. It was a huge moment for Cabrera, a chance to expand his booming business to a whole new hemisphere. When his guests arrived that day, Cabrera beckoned them through a service door and up a flight of stairs. The trio entered a spacious workshop where tiny plumes of white smoke hung in the air—the product of multiple soldering irons making connections simultaneously. Four employees sat hunched over a workbench, tweaking electronics; others had their heads buried in slot-machine cabinets, installing LCD monitors and button sets. In the room stood 40 finished machines, each indistinguishable from a genuine IGT product.
Cabrera ushered Frost and Mantilla into a side room, where he popped open a briefcase. Inside was the burner he used to load IGT software onto new memory cards. He boasted to the Americans that he could duplicate any IGT game on the market. Frost snapped photographs of the counterfeiting equipment as the Spanish-speaking Mantilla translated Cabrera’s spiel.
The Americans’ visit didn’t end with a major deal, but Mantilla and Cabrera managed to develop a warm bond. Several weeks after his return to the US, Mantilla called Cabrera to discuss his frustrations with Aqua Gaming. He wasn’t happy at his job, and he yearned to strike out on his own. Mantilla suggested that Cabrera could assist with that plan by making him FE Electronic’s exclusive US distributor, in exchange for 50 percent of all sales. He stressed that his language skills would come in handy when dealing with Latin American clients, and that he still had strong contacts in Las Vegas.
Cabrera was wary of partnering with someone who was just starting out, but he was won over by Mantilla’s genial charm. Mantilla was a young father with a good heart and something to prove; Cabrera figured he would be plenty motivated to move product. He agreed to make Mantilla’s new company, Southeast Gaming, his sole representative in the Americas.
Just as he’d promised, Mantilla started doing extraordinary business right away. FE Electronic shipped containers full of machines to Mantilla in Florida or directly to brokers on the Eastern Seaboard and in Latin America with whom he had set up deals. The two men faithfully split the proceeds right down the middle; during their first year in business together, Mantilla wired at least $400,000 to Cabrera’s Hansabank account in Riga, a fortune by Latvian standards. Few slots dealers could resist the lure of prime IGT machines for pennies on the dollar.
IGT realized something was amiss in mid-2007. Sales of its machines were suddenly plummeting in Peru. The company began to suspect that counterfeit slots were to blame. When its engineers took apart several suspicious machines pulled from casino floors, they found circuit boards that had been modified with jumper wires and off-brand memory cards. IGT quickly discovered that the Peruvian casinos were getting these slots from suppliers who dealt with customers all over the world, including the US. “This was no small problem,” says Robert Melendres, IGT’s chief legal officer. “This was millions of dollars in business.”
Meanwhile Cabrera and Mantilla had developed a problem of their own: They had so many orders to fill that they could barely keep pace. Building and shipping machines was both time consuming and expensive, with each cargo container full of merchandise costing around $30,000 to send across the Atlantic. So Mantilla branched out into a less cumbersome line of business: selling Cabrera’s pirated software so slot dealers could build their own machines—any established refurbisher would be able to easily get fresh cabinets and signs. He sold the programs preloaded onto memory cards, along with detailed instructions on how to do the jumper-wire hack to make the cards work.
With his newfound wealth, Cabrera moved into a sparkling modern apartment in a neighborhood just east of downtown Riga. His first marriage had dissolved years earlier, and he decided to try again, this time with his longtime girlfriend, Olga, a gorgeous woman 15 years his junior. Cabrera made a triumphant return to Cuba for the wedding, which offered him a chance to show his extended family just how prosperous he had become. Henry Mantilla and his wife, Vanessa, were there to toast the happy couple’s future together.
On the afternoon of April 15, 2009, Cabrera decided to take a short break from work to hit the gym. When he returned, he found a fleet of vans from Latvia’s Ministry of the Interior blocking FE Electronic’s driveway. Thirty cops in body armor were streaming in and out of the building, wheeling out dozens of slot machines.
Cabrera was baffled by the number of police officers. He immediately wondered if the Latvian government had mistaken him, a tax-paying small-business owner, for some sort of mafioso. But then he noticed that one of the cops standing watch over the front door had dark brown hair—something of a rarity in Latvia, where much of the population is blond. As the man turned to speak to a colleague, Cabrera saw a can of Coca-Cola jutting from a side pocket of his backpack. That was when Cabrera understood what was going on: the Americans had come for him.
As Cabrera glumly watched his business get stripped bare, the brown-haired cop’s fellow FBI agents in the US were busy raiding Southeast Gaming and three other companies suspected of receiving or selling FE Electronic merchandise.
IGT had provided the FBI with the locations of alleged counterfeit machines, and the bureau had quickly traced them through various middlemen all the way back to Southeast Gaming. Apparently Mantilla hadn’t been too careful in his dealings. “He asked if I wanted to buy some cloned boards—he said, ‘Look, we reverse-engineer these,’” says Nevin Moorman, owner of East Coast Slots of Pompano Beach, Florida, who had been approached by Mantilla. “I said I wouldn’t touch that shit with a 10-foot pole—I’m too pretty and I ain’t that big, so I don’t want to go to prison.”
The FBI had little trouble luring Mantilla into doing business with an informant, a Las Vegas slot dealer who repeatedly purchased preloaded memory cards from Southeast Gaming. Mantilla grew to trust this informant so much that he eventually offered him one of Cabrera’s burners. He was willing to do so because he needed help; Southeast Gaming had too many orders to fill, so he wanted someone to assist with burning software onto memory cards. “That raised the stakes,” says Thomas Dougherty, a trial attorney with the US Department of Justice’s Computer Crime and Intellectual Property Section. “It created a lot more urgency, in that we were concerned about them transferring the ability to counterfeit these devices so others could flood the market.”
Cabrera spent just two days in Latvian custody before being released. His lawyers advised him that the worst punishment he faced was community service. But then in August 2009, Cabrera was suddenly rearrested and sent to Riga Central Penitentiary, where he was informed that he was likely to become the first criminal ever extradited from Latvia to the US.
Cabrera was astonished. He knew his business ran afoul of the law, but it wasn’t like he was causing anyone physical harm.
What Cabrera failed to understand was that his operation had exposed a major vulnerability at a multibillion-dollar company—”one of the major corporate citizens of Nevada,” as Dougherty calls IGT. And there is nothing that slot manufacturers fear more than losing control of their code. An example had to be made of the Cuban-Latvian hacker.
“I never thought that I would ever go on a vacation to the US,” Cabrera says in Spanish, chuckling slightly. We are sitting in a visitor’s room at a jail in Haskell, Texas, separated by a thick pane of Plexiglas. A wiry, neatly groomed 45-year-old who looks like a Latin version of Scotty from Star Trek, Cabrera explains that this is the 10th detention facility he’s passed through since arriving in the US. The worst of the lot was a privately run prison in Eden, Texas, where his fellow inmates rioted over poor conditions and had to be subdued with tear gas.
The lowest moment, though, came right after he and Mantilla were sentenced in Las Vegas last August. Having pled guilty to conspiracy to produce and sell counterfeit IGT slot machines, the former partners were handed identical sentences: two years in prison and a $151,800 fine. (Had they gone to trial, they would have risked getting up to 45 years each.) Cabrera was then suited up in a straitjacket, chained to some other inmates, and loaded into a prisoner transport van for a ride to Chaparral, New Mexico, where he was to be processed into the federal penal system. During the 15-hour trip across the boiling-hot desolation of western and southern Arizona, he just stared at the scrub brush, wondering how his life had gone so awry.
Having been credited with time served for the months he spent in Latvian custody, Cabrera is now awaiting deportation back to Riga. But that process has proven more complicated than anyone anticipated. Though he moved to Latvia in 1985, Cabrera never became a citizen; he instead kept renewing his residency permit every five years. His latest permit expired while he was incarcerated, meaning that he can’t go home. Cabrera had a Cuban passport, but it was seized upon his arrival in the US. He is now stateless.
As he waits to see whether the US and Latvia can sort out his immigration status, Cabrera spends 23 hours a day locked in his cell. The isolation has given him plenty of time to ponder how he got into this mess. “I am a person who can fix things,” he says. “And there is a time when a person who can fix things, when he has been doing it long enough, realizes he can do something more, too. And the moment you realize that is the moment you’ve just done something illegal.”
If Cabrera does make it back to Latvia, he vows to take his career in a radically different direction; he claims that he would like to help gambling addicts, though he is vague on the specifics. He does not believe that his retirement from slot counterfeiting is any great cause for celebration at IGT, however. “What I was doing, it is a common thing,” he says with a shrug. “If you studied electronics, you could do it, too.” Especially if you love to tinker.
Contributing editor Brendan I. Koerner (email@example.com) wrote about US manufacturing in issue 19.03.
The static motor analyzer is a predictive-maintenance solution that offers flexible fault recognition in a single portable instrument. These analyzers integrate a wide range of electrical tests, including surge comparison, DC hipot, step voltage, continuous ramp, megohm, and winding-resistance tests.
Some manufacturers are Baker-SKF, Electrom Instruments and Samatic.
Does Technology Aid the Workforce?
I always thought that engineers aided the workforce by developing new products that put people to work. After all, the invention of the transistor put millions to work, and new technology aided the growth of the computer, automotive, and aircraft industries.
In the 21st century the situation may be reversed, with too much technology causing a decrease in jobs. At least that’s the opinion of Carl Benedikt Frey and Michael A. Osborne of the University of Oxford in the UK. These are people with the appropriate credentials: Frey is with the Programme on the Impacts of Future Technology, and Osborne is in the Department of Engineering Science at Oxford. They highlighted only one aspect of increased technology, computerization, and it is a major consideration.
A September 2013 paper by Frey and Osborne asks the question: “The Future of Employment: How Susceptible Are Jobs to Computerization?” To answer it, they reviewed papers from dozens of sources covering the subject, developed a methodology to categorize occupations according to their susceptibility to computerization, applied that methodology to estimate the probability of computerization for 702 detailed occupations, and examined the expected impacts of future computerization on the US labor market.
Motivation for the paper came from John Maynard Keynes’s frequently cited prediction of widespread technological unemployment “due to our discovery of means of economizing the use of labor outrunning the pace at which we can find new uses for labor.” John Maynard Keynes was a British economist whose ideas have fundamentally affected the theory and practice of modern macroeconomics, and transformed the economic policies of governments. His ideas are the basis for the school of thought known as Keynesian economics, and its various offshoots.
Frey and Osborne noted that “over the past decades, computers have substituted for a number of jobs, including the functions of bookkeepers, cashiers and telephone operators.” And, they pointed out that “recently the poor performance of labor markets across advanced economies has intensified the debate about technological unemployment.” Although they said “there is ongoing disagreement about the driving forces behind the persistently high unemployment rates, a number of scholars have pointed to computer controlled equipment as a possible explanation for recent jobless growth.”
“The impact of computerization on the labor market is well chronicled in the literature, with the decline of employment in occupations mainly consisting of tasks following well-defined procedures that can easily be performed by sophisticated algorithms. For example, studies emphasize that the ongoing decline in manufacturing employment and the disappearance of other routine jobs is causing the current low rates of employment. Besides the computerization of routine manufacturing tasks, studies have documented a structural shift in the labor market, with workers reallocating their labor supply from middle-income manufacturing to low-income service occupations. Arguably, this is because the manual tasks of service occupations are less susceptible to computerization, as they require a higher degree of flexibility and physical adaptability. At the same time, with falling prices of computing, problem-solving skills are becoming relatively productive, explaining the substantial employment growth in occupations involving cognitive tasks where skilled labor has a comparative advantage, as well as the persistent increase in returns to education.”
“According to Brynjolfsson and McAfee (2011), the pace of technological innovation is still increasing, with more sophisticated software technologies disrupting labor markets by making workers redundant. What is striking about the examples in their book is that computerization is no longer confined to routine manufacturing tasks. The autonomous driverless cars, developed by Google, provide one example of how manual tasks in transport and logistics may soon be automated. In the section “In Domain After Domain, Computers Race Ahead”, they emphasize how fast moving these developments have been. Less than 10 years ago, in the chapter “Why People Still Matter”, Levy and Murnane (2004) pointed at the difficulties of replicating human perception, asserting that driving in traffic is insusceptible to automation: “But executing a left turn against oncoming traffic involves so many factors that it is hard to imagine discovering the set of rules that can replicate a driver’s behavior.” Six years later, in October 2010, Google announced that it had modified several Toyota Priuses to be fully autonomous (Fig. 1).”
To the authors’ knowledge, no study has yet quantified what recent technological progress is likely to mean for the future of employment. “This present study intends to bridge this gap in the literature. Although there are indeed existing useful frameworks for examining the impact of computers on the occupational employment composition, they seem inadequate in explaining the impact of technological trends going beyond the computerization of routine tasks.”
Current literature distinguishes between cognitive and manual tasks on the one hand, and routine and non-routine tasks on the other. “While the computer substitution for both cognitive and manual routine tasks is evident, non-routine tasks involve everything from legal writing, truck driving and medical diagnoses, to persuading and selling. In the present study, we will argue that legal writing and truck driving will soon be automated, while persuading, for instance, will not.” Fig. 2 shows the ENON personal assistance robot.
“Drawing upon recent developments in Engineering Sciences, and in particular advances in the fields of Data Mining, Machine Vision, Computational Statistics and other sub-fields of Artificial Intelligence, we derive additional dimensions required to understand the susceptibility of jobs to computerization. Needless to say, a number of factors are driving decisions to automate and we cannot capture these in full. Rather we aim, from a technological capabilities point of view, to determine which problems engineers need to solve for specific occupations to be automated. By highlighting these problems, their difficulty and to which occupations they relate, we categorize jobs according to their susceptibility to computerization. The characteristics of these problems were matched to different occupational characteristics, allowing us to examine the future direction of technological change in terms of its impact on the occupational composition of the labor market, but also the number of jobs at risk should these technologies materialize.”
“While computerization has been historically confined to routine tasks involving explicit rule-based activities, algorithms for big data are now rapidly entering domains reliant upon pattern recognition and can readily substitute for labor in a wide range of non-routine cognitive tasks. In addition, advanced robots are gaining enhanced senses and dexterity, allowing them to perform a broader scope of manual tasks. This is likely to change the nature of work across industries and occupations.”
“In this paper, we ask the question: how susceptible are current jobs to these technological developments? To assess this, we implemented a novel methodology to estimate the probability of computerization for 702 detailed occupations. Based on these estimates, we examine expected impacts of future computerization on labor market outcomes, with the primary objective of analyzing the number of jobs at risk and the relationship between an occupation’s probability of computerization, wages and educational attainment.”
“We distinguish between high, medium and low risk occupations, depending on their probability of computerization. We make no attempt to estimate the number of jobs that will actually be automated, and focus on potential job automatability over some unspecified number of years. According to our estimates around 47 percent of total US employment is in the high risk category. We refer to these as jobs at risk – i.e., jobs we expect could be automated relatively soon, perhaps over the next decade or two.”
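The three-way bucketing described above can be sketched in a few lines. The 0.7 and 0.3 cutoffs match the thresholds the paper names for high and low risk; the occupations and probabilities below are illustrative placeholders, not the paper’s actual estimates.

```python
# Hypothetical per-occupation probabilities of computerization
# (placeholder values, not Frey and Osborne's published estimates).
ESTIMATES = {
    "telemarketer": 0.99,
    "truck driver": 0.79,
    "economist": 0.43,
    "recreational therapist": 0.003,
}

def risk_category(p):
    """Bucket an occupation by its probability of computerization."""
    if p >= 0.7:
        return "high"
    if p > 0.3:
        return "medium"
    return "low"

buckets = {name: risk_category(p) for name, p in ESTIMATES.items()}
# Share of this toy sample in the high-risk bucket (the paper's analogous
# figure, weighted by actual employment, is about 47 percent).
share_high = sum(1 for c in buckets.values() if c == "high") / len(buckets)
```

The paper's headline number comes from weighting each occupation's bucket by its share of total US employment, rather than the unweighted count used in this toy sample.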
“Our model predicts that most workers in transportation and logistics occupations, together with the bulk of office and administrative support workers, and labor in production occupations, are at risk. These findings are consistent with recent technological developments documented in the literature. More surprisingly, we find that a substantial share of employment in service occupations, where most US job growth has occurred over the past decades, is highly susceptible to computerization. Additional support for this finding is provided by the recent growth in the market for service robots and the gradual diminishment of the comparative advantage of human labor in tasks involving mobility and dexterity.”
“Finally, we provide evidence that wages and educational attainment exhibit a strong negative relationship with the probability of computerization. We note that this finding implies a discontinuity between the nineteenth, twentieth and the twenty-first century, in the impact of capital deepening on the relative demand for skilled labor. While nineteenth century manufacturing technologies largely substituted for skilled labor through the simplification of tasks, the Computer Revolution of the twentieth century caused a hollowing-out of middle-income jobs. Our model predicts a truncation in the current trend towards labor market polarization, with computerization being principally confined to low-skill and low-wage occupations. Our findings thus imply that as technology races ahead, low-skill workers will reallocate to tasks that are non-susceptible to computerization – i.e., tasks requiring creative and social intelligence. For workers to win the race, however, they will have to acquire creative and social skills.”
The US Patent and Trademark Office may be the key battleground in today’s high-tech lawsuits, but it’s also home to a trove of inventions that have fallen into the public domain. Now patent lawyer Martin Galese is trying to bring some 21st century tech to the charming ideas patented in the 19th and 20th centuries. He’s dug up eccentric creations — from an Escher-esque building block to a combination comb and hair clip — and is rebuilding them using digital modeling tools, allowing anyone with a 3D printer to own a once-patented work from the past.
“You’re holding the 19th century by way of something that was produced in the 21st century,” Galese told The New York Times. Galese said that he sees the intricate drawings that accompany many patents as beautiful works of art, but that isn’t the aspect he appreciates most: the real idea of his Patent-Able blog, where all of his 3D models are featured, is to help people see the patent office as a wealth of ideas, and not just the impetus for endless legal battles.
MOST PATENTS HAVE FALLEN INTO THE PUBLIC DOMAIN
Galese notes that there are over 8 million registered patents — and according to the Patently-O law blog’s estimates, only about 2.1 million of those were still in force last year. Just over a dozen patents have been featured so far on Galese’s blog, and he’s still on the lookout for “cool, weird, [and] surprisingly useful” ideas from the past to turn into 3D models. He uses MakerBot’s Thingiverse website — which collects and shares user-generated 3D models — to host all of his recreations. Galese thinks that it’s a fitting home for them: the patent office’s archives, he told the Times, are really just the “original Thingiverse.”