
cpu - 80s / 8086 vs 68000 a comparison (1987)
«  by chrisNova777 on Today at 07:16:45 AM »

(Written in 1987)

I. Introduction

Among the high-end microprocessors currently available, two of the most widely used are the Intel 8088 and the Motorola MC68000 (usually abbreviated to just 68000).  Both are members of small families of microprocessors.  The 8088 family includes the 8086 and the more powerful 80286 and 80386; it can trace its lineage through the earlier 8080 and 8008 all the way back to the first microprocessor, the 4004. Since the 8088 is identical to the 8086 in most respects, and since the 8086 is considered to be the “parent” of the 8088 chip’s microprocessor family, I will refer to the 8086 in this paper; however, everything said about it also applies to the more widely-used 8088. The 68000 family contains, besides the 68000 itself, the 68008, the 68010, the 68012, and the 68020.  These processors generally have additional registers and operation codes beyond what the 68000 has; however, I will not be dealing with them.

The rivalry between the 8088-based IBM PC family and the 68000-based Apple Macintosh series sets one wondering how the microprocessors they use perform.  I will be examining these two microprocessors from a programmer’s point of view; the only references to hardware characteristics will be those that are important to most assembly-language programmers using the chips.

II. User Registers

The registers that are available to the assembly-language programmer are where some of the most obvious major differences between the 8086 and the 68000 can be found.  The 8086 contains 14 16-bit registers, four of which can also be used as two eight-bit registers.  All of the 8086’s registers are used for specific functions, although several can be used for general storage as well.

The four “general purpose” registers in the 8086 are called AX, BX, CX, and DX to indicate their “eXtended” (16-bit) length; their two halves can also be referenced as eight-bit registers called AH and AL, BH and BL, CH and CL, and DH and DL, for the High and Low halves of the 16-bit value. The AX register is the primary accumulator; many instructions, such as MOV and AND, have special short forms which specifically deal with the AX register (or the AL register, for eight-bit operations). The BX, CX, and DX registers can also be used as eight- or 16-bit accumulators, but they have unique functions to which they are dedicated in some situations: BX is used as a base pointer in certain addressing modes, CX is used as a loop counter, and DX is used to hold I/O port addresses.
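The high/low split described above can be sketched in Python (this example is not part of the original paper; the register names are used only as labels for a 16-bit value and its two byte halves):

```python
# Sketch: how the 8086's 16-bit AX register decomposes into its
# AH (high byte) and AL (low byte) halves, and recombines.

def split_ax(ax):
    """Return (AH, AL) for a 16-bit AX value."""
    ah = (ax >> 8) & 0xFF   # high byte
    al = ax & 0xFF          # low byte
    return ah, al

def join_ax(ah, al):
    """Recombine AH and AL into a 16-bit AX value."""
    return ((ah & 0xFF) << 8) | (al & 0xFF)

ah, al = split_ax(0x12F3)
print(hex(ah), hex(al))        # 0x12 0xf3
print(hex(join_ax(ah, al)))    # 0x12f3
```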

Besides the four “general purpose” registers, the 8086 has four 16-bit index registers, called SP (the stack pointer), BP (the base pointer, used similarly to BX in some addressing modes), SI (source index for string operations), and DI (destination index for string operations). It also has a 16-bit PC register (the program counter) and four 16-bit “segment” registers, called CS (code segment), DS (data segment), SS (stack segment), and ES (extra segment). I will deal more fully with the segment registers in the Addressing Modes section of the paper. Finally, the 8086 has a 16-bit Status register, also called the Flags register or the Program Status Word (PSW).  This register contains nine one-bit flags which keep track of the results of arithmetic operations and certain operating modes; some of the flags will be discussed in detail later on.

The 68000 contains 18 32-bit registers and one 16-bit register, giving it considerably more internal storage space than the 8086.  In addition, the 68000 registers come much closer to being “general purpose” than do those of the 8086.  The 68000 has eight 32-bit “data” registers, named D0 through D7. They can be used to hold 32-, 16-, or 8-bit values, depending on the instructions used to manipulate them. The 16- and 8-bit instructions affect only the appropriate number of lower order bits in the registers.  The data registers can all be used for all arithmetic operations, and most other operations as well.

The 68000 also has seven 32-bit general-purpose “address” registers, named A0 through A6. These registers can also be used for 16-, but not 8-bit operations. They can be used in some arithmetic operations, but they are usually used to hold the addresses of operands which will be manipulated.

The 68000 also has a 32-bit program counter (PC) and two 32-bit stack pointers, called USP (user stack pointer) or A7, used in “user mode,” and SSP (system stack pointer) or A7′, used in “supervisor mode” (see the Other Differences section below for a more complete description of the 68000’s operation modes). However, the USP register can be copied to or from any of the address registers during supervisor mode by using the MOVE instruction.  Finally, the 68000 has a 16-bit SR (Status Register) register, of which the lower eight bits are available in user mode and are called the CCR (Condition Code Register). The CCR essentially holds the results of arithmetic calculations, as with the 8086 Flags register; the rest of the 68000 SR holds other operation mode flags.

In summary, while some of the registers in these two processors share similar functions, e.g. the stack pointers, program counters, and status registers, their “general purpose” registers differ greatly. The 68000 has more of them (15 vs. six, including SI and DI), and they can hold larger values and be used for a greater variety of purposes than those on the 8086.

III. Addressing Modes

The 8086 and the 68000 also differ in how they find the operands for their machine language opcodes, although their addressing modes look less similar at first glance than they actually are.  The 8086, as mentioned before, has a 16-bit program counter and four 16-bit “segment registers.” Depending on the operation being performed, the value of one of the segment registers, shifted left by four bits (in effect multiplying it by 16), is added to an “effective address,” calculated in one of the ways described below, resulting in a 20-bit absolute address.  This means that the 8086 can directly address one megabyte of memory. For fetching instructions, the Code Segment register is used; for stack operations, the Stack Segment register; for certain string operations, the Extra segment register; and for most other operations, the Data Segment register.  The Data Segment default can be overridden to use another segment register in most cases.
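The segment arithmetic described above — segment register shifted left four bits, then added to a 16-bit effective address — can be sketched in Python (a minimal illustration, not from the original paper):

```python
# Sketch: the 8086 physical-address calculation. The segment value is
# shifted left 4 bits (multiplied by 16) and added to a 16-bit effective
# address, producing a 20-bit absolute address (1 MB address space).

def physical_address(segment, offset):
    return ((segment << 4) + offset) & 0xFFFFF  # wraps within 1 MB

# e.g. DS = 0x1234, effective address 0x0010:
print(hex(physical_address(0x1234, 0x0010)))  # 0x12350

# 2^20 bytes = the one megabyte the 8086 can address directly:
print(2 ** 20)  # 1048576
```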

The 8086 can access its operands in one of twelve general ways, called addressing modes. They are:

Immediate Mode:
1. Operand is in the opcode
Direct Mode:
2. Operand is in the register specified in the opcode
3. Address of operand is DS + 16-bit signed displacement following the opcode
Direct, Indexed Mode:
4. Address of operand is DS + (DI or SI) + 8- or 16-bit signed displacement
Implied Mode:
5. Address of operand is DS + (DI or SI)
Base-Relative, Direct Mode:
6. Address of operand is DS + BX
7. Address of operand is DS + BX + 8- or 16-bit signed displacement
Base-Relative, Direct, Indexed Mode:
8. Address of operand is DS + BX + (DI or SI) + 8- or 16-bit signed displacement
Base-Relative, Implied Mode:
9. Address of operand is DS + BX + (DI or SI)
Base-Relative, Direct Stack Mode:
10. Address of operand is SS + BP + 8- or 16-bit signed displacement
Base-Relative, Direct, Indexed, Stack Mode:
11. Address of operand is SS + BP + (DI or SI) + 8- or 16-bit signed displacement
Base-Relative, Implied Stack Mode:
12. Address of operand is SS + BP + (DI or SI)

Note that there are actually more than twelve combinations of registers used for addressing, because of the many places where either DI or SI can be used.  However, the above are the twelve patterns that the 8086 uses to compute effective addresses.  Note the dedicated use of specific registers for each addressing mode.
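One of the more involved patterns above — mode 8, base-relative, direct, indexed — can be sketched numerically (an illustration added here, not from the original paper; the signed-displacement helper is my own):

```python
# Sketch: 8086 addressing mode 8 above — effective address is
# BX + (DI or SI) + a signed displacement, wrapped to 16 bits,
# then combined with the shifted DS segment.

def to_signed(value, bits):
    """Interpret a raw value as a two's-complement signed integer."""
    sign = 1 << (bits - 1)
    return (value & (sign - 1)) - (value & sign)

def mode8_address(ds, bx, index, disp, disp_bits=8):
    ea = (bx + index + to_signed(disp, disp_bits)) & 0xFFFF  # 16-bit wrap
    return ((ds << 4) + ea) & 0xFFFFF                        # 20-bit result

# DS=0x2000, BX=0x0100, SI=0x0020, 8-bit displacement 0xFE (= -2):
print(hex(mode8_address(0x2000, 0x0100, 0x0020, 0xFE)))  # 0x2011e
```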

The 68000, in contrast, has no need for the segment registers because of its 32-bit program counter. In fact, the program counter actually allows larger addresses than the chip’s hardware can address: the 68000’s address bus has only 24 bits.  Still, that is enough to handle 16 megabytes, compared with the 8086’s one megabyte address space.  In addition, the specialized register uses of the 8086 are largely missing in the 68000; the main distinction is between the seven address registers and the eight data registers.  Any data register can be used for any purpose that any other data register can; likewise for the address registers.

Here are the available addressing modes in the 68000:

Immediate Mode:
1. Operand is in the opcode
Direct Mode:
2. Operand is in the register specified in the opcode
Absolute Mode:
3. Address of operand is in the opcode
Address Register Indirect Mode:
4. Address of operand is in An (n = 0-6)
5. Address of operand is in An; predecrement An (n = 0-6)
6. Address of operand is in An; postincrement An (n = 0-6)
Address Register Indirect Mode/Program Counter Indirect Mode:
7. Address of operand is (An or PC) + 16-bit signed displacement (n = 0-6)
8. Address of operand is (An or PC) + Xm + 16- or 32-bit signed displacement (n = 0-6; Xm = A0-A6 or D0-D7)

Again, there are actually many more combinations of registers than there are access patterns, i.e. addressing modes.  The 68000’s addressing modes tend to each use fewer registers than the 8086’s do; however, most of that difference is due to the 8086’s use of segment registers.  Once they are taken into account, the addressing modes of the two processors are remarkably similar; the largest difference between them is that the 68000 modes can be used with many more registers than can the 8086 modes.  Aside from that, the 8086 lacks predecrement and postincrement modes, a minor omission.

IV. Instruction Sets

Many of the differences between the instruction sets of the 8086 and the 68000 have already been dealt with in the section on User Registers and the section on Addressing Modes.  Differences in those areas lead directly to readily evident differences in the instruction sets, for example the sizes of the values saved on the stack by the CALL/JSR instructions. Below I will discuss notable differences not covered earlier.

The standard assembler mnemonics for the 8086 and the 68000 differ considerably.  Although both use capital letters, the 8086 uses the order “destination, source” in its operand field while the 68000 uses the reverse.  Also, instructions which perform essentially the same function are given different names by the two assemblers. For example, the 8086 CALL (subroutine) performs analogously to the 68000 JSR (Jump to SubRoutine), and the 8086 SBB (SuBtract with Borrow) performs the same function as the 68000 SUBX (SUBtract with eXtend).  Other mnemonics have smaller differences, for example: MOV on the 8086 is MOVE on the 68000 (and as noted above, the order of operands is reversed).

“String operations” means the instructions used to implement operations on consecutive memory elements, usually with some kind of automation or semi-automation. The 8086 has special instructions included for just this purpose. They automatically perform an operation (moving or comparing data), then increment or decrement (depending on the state of the Direction flag in the Flags register) index registers (SI and/or DI) to point to the next element. If they are preceded by the REP instruction, they also decrement and test a counter (CX); if it is not zero, they keep looping. Perhaps the weakest aspect of these instructions, called MOVS, LODS, STOS, SCAS and CMPS, is that they inflexibly use specific registers for each aspect of their operation. Otherwise, they are faster and shorter than the corresponding normal loops would be.
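The REP MOVS behavior described above can be simulated in Python (a sketch, not part of the original paper; memory is modeled as a simple list):

```python
# Sketch: simulating 8086 "REP MOVS" semantics. SI and DI step forward
# or backward depending on the Direction flag, while CX counts down;
# the loop repeats until CX reaches zero.

def rep_movs(memory, si, di, cx, direction_flag=False):
    step = -1 if direction_flag else 1
    while cx:
        memory[di] = memory[si]   # the MOVS step
        si += step                # auto-increment/decrement SI
        di += step                # auto-increment/decrement DI
        cx -= 1                   # REP decrements and tests CX
    return si, di

mem = list(b"ABCDE") + [0] * 5
rep_movs(mem, si=0, di=5, cx=5)
print(bytes(mem[5:]))  # b'ABCDE'
```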

The 68000 does not have dedicated string operation instructions. Instead, it has the predecrement and postincrement addressing modes, mentioned earlier, which can be used with any of the address registers.  These are used with the standard conditional branch instructions.  Thus although the 68000 is more flexible in its implementation of string functions, it cannot have loops which are as tight as those the 8086 can.

Stack operations are an area where the 68000 is clearly more versatile than the 8086.  The 8086 uses PUSH and POP instructions which can only move 16-bit values onto or off of the system stack. To implement more than one stack, the programmer would need to use MOV along with DEC (DECrement value) and INC (INCrement value) on the register or memory location chosen as the second stack’s pointer. The 68000, by contrast, handles stack operations by using the predecrement (for push) and postincrement (for pop) addressing modes on any one of the address registers, including A7, the USP register. The 68000 can therefore not only move 16-bit values onto or off of the stack, but it can perform other operations such as AND and CLR on stack values, and the MOVEM (MOVE Multiple registers) instruction can be used to good advantage to quickly push or pop many registers at once to or from the stack. The 68000 can also do 32- and 8-bit stack operations.  However, register A7 won’t accept 8-bit operations, to guarantee that words it contains will start on even addresses (see the section on Other Differences below). The 68000 also has two instructions, LINK and UNLK, which allocate and deallocate, respectively, a stack frame for temporary storage.  The 8086 has a form of the RET instruction which adds a given value to the stack pointer before pulling the address off of the stack, but it is most useful for cleaning up at the end of subroutines which are passed values on the stack.
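The predecrement-push / postincrement-pop scheme described above can be sketched in Python (an illustration added here, not from the original paper; the 16-bit word size is just one of the sizes the 68000 supports):

```python
# Sketch: 68000-style stack operations. Push predecrements an address
# register (A7 here), pop postincrements it; the stack grows downward.
# The 68000 is big-endian, so the high byte is stored first.

class Stack68k:
    def __init__(self, size=64):
        self.mem = bytearray(size)
        self.a7 = size                  # stack pointer starts at the top

    def push_word(self, value):
        self.a7 -= 2                    # predecrement
        self.mem[self.a7:self.a7 + 2] = value.to_bytes(2, "big")

    def pop_word(self):
        value = int.from_bytes(self.mem[self.a7:self.a7 + 2], "big")
        self.a7 += 2                    # postincrement
        return value

s = Stack68k()
s.push_word(0x1234)
s.push_word(0xBEEF)
print(hex(s.pop_word()))  # 0xbeef
print(hex(s.pop_word()))  # 0x1234
```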

In the area of integer arithmetic, the 8086 is slightly more versatile than the 68000. It can add and subtract 8- or 16-bit values, with or without adding in the carry; it can multiply two 8-bit or two 16-bit values, signed or unsigned; and it can divide 16-bit by 8-bit or 32-bit by 16-bit values, signed or unsigned. The 68000 can add and subtract 32-bit values in addition to the operations that the 8086 can do, but it can’t do the 8×8-bit multiplication or the 16/8-bit division that the 8086 can. Of course, those operations are easy to do with the multiplication and division instructions that it does have, simply by clearing the upper parts of the affected registers beforehand.
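The clear-the-upper-parts trick mentioned above can be made concrete (a sketch, not from the original paper; `mul16` stands in for a 16×16-bit multiply instruction):

```python
# Sketch: doing an 8x8-bit multiply with a 16x16-bit instruction by
# zero-extending the operands first, as the paragraph above describes.

def mul16(a, b):
    """Stand-in for a 16x16 -> 32-bit unsigned multiply instruction."""
    return (a & 0xFFFF) * (b & 0xFFFF)

def mul8_via_mul16(a, b):
    # Clear the upper parts of the registers beforehand (zero-extend),
    # then use the wider multiply.
    return mul16(a & 0xFF, b & 0xFF)

print(mul8_via_mul16(0xFF, 0xFF))  # 65025 (= 255 * 255)
```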

Besides the regular binary arithmetic operations, the 8086 has instructions to adjust the result so that it can do arithmetic on BCD and ASCII values, as well. The 68000 has special instructions for doing BCD arithmetic, but it has nothing equivalent to the 8086’s ASCII adjustment instructions.
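The BCD arithmetic both chips support can be sketched in Python (an added illustration, not from the original paper; this models the adjust-after-add idea, not any one instruction exactly):

```python
# Sketch: packed-BCD addition. Each nibble of a byte holds one decimal
# digit (0-9); after a binary add, any nibble exceeding 9 is adjusted
# and a carry is propagated, which is what the BCD-adjust instructions do.

def bcd_add_byte(a, b):
    """Add two packed-BCD bytes (two digits each); return (result, carry)."""
    low = (a & 0x0F) + (b & 0x0F)
    carry_low = low > 9
    if carry_low:
        low -= 10
    high = ((a >> 4) & 0x0F) + ((b >> 4) & 0x0F) + carry_low
    carry = high > 9
    if carry:
        high -= 10
    return (high << 4) | low, carry

result, carry = bcd_add_byte(0x38, 0x47)   # decimal 38 + 47 = 85
print(hex(result), carry)  # 0x85 False
```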

Although the interrupt handling facilities of the 8086 and the 68000 differ considerably, if we limit ourselves to looking at the software interrupt instructions we find that they are very similar. The 8086 INT instruction causes a jump to any of 224 addresses in low memory; there are actually 256 available, but 32 of them are reserved for other purposes.  INTO uses one of those reserved addresses to cause an interrupt if the overflow flag in the Flags register is set. Both INT and INTO push the Flags, CS and PC registers onto the stack before jumping. The IRET instruction returns from an interrupt, restoring the values of Flags, CS and PC from the stack.

The 68000 has similar instructions with different names: TRAP causes a jump to any of 192 addresses in low memory (the other 64 are reserved) and TRAPV causes an interrupt if the overflow flag is set, both of them saving the values of PC and SR on the stack. RTE is the 68000 equivalent of IRET. Note that although the 68000 has 8 priority levels for interrupts, compared to the 8086’s one level, software interrupts on the 68000 always operate at the same priority, so software interrupts are really no different from those on the 8086 in that respect.  There is one small difference, however: interrupts on the 68000 cause the processor to enter supervisor mode, which the 8086 does not have.

In looking over the instruction sets of the 8086 and the 68000, I found a few other oddities I would like to discuss. The first instructions in the alphabetical lists for both processors made me wonder if someone was playing a joke on me: AAA and ABCD on the 8086 and 68000, respectively. The abbreviations stand for Adjust result of ASCII Addition and Add Binary Coded Decimal, but they sure look to me like they were contrived to be cute.  That’s okay. It’s fun to see something like that sneak into general production.

The 8086’s assorted odd instructions include LAHF and SAHF, which load and save the lower byte of the Flags register into the AH register. That seems rather specialized to me. There is also JCXZ, which Jumps if the CX register is Zero. It is the only instruction in the set which tests a full register instead of just a bit or two, and it reinforces the specialized use of CX as a counter.  Last, there is the XLAT instruction, which moves the value pointed to by AL + BX into AL.  It is intended for use in table lookups, but it seems very limited and specialized to me: there is no choice as to which registers may be used.  However, in that respect, it is consistent with what seems to be the general register use philosophy of the 8086.
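The XLAT lookup described above is simple to model (a sketch added here, not from the original paper; the digit-to-ASCII table is a hypothetical example):

```python
# Sketch: the 8086 XLAT instruction — AL is replaced by the byte found
# at address BX + AL. Only these two registers can be used.

def xlat(memory, bx, al):
    return memory[bx + al]

# Hypothetical table at BX mapping digit values 0-9 to their ASCII codes:
mem = bytearray(256)
bx = 0x10
mem[bx:bx + 10] = b"0123456789"

al = 7
al = xlat(mem, bx, al)   # AL now holds the ASCII code for '7'
print(chr(al))  # 7
```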

The 68000 also contains some unusual instructions. In addition to the standard bit test instruction, BTST, it has BCHG, BCLR, and BSET, which combine two operations: they test a bit and then complement, clear, or set it respectively. It also has instructions for dealing with variable-length bit fields, with the mind-numbing mnemonics BFEXTU (Bit Field EXTract Unsigned), BFFFO (Bit Field Find First One), BFINS (Bit Field INSert), BFSET (Bit Field SET), and BFTST (Bit Field TeST). I never thought I’d see three F’s in a row in an assembly language instruction!
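The combined test-then-modify behavior of BCHG, BCLR, and BSET can be sketched as follows (an illustration, not from the original paper; real hardware sets a condition code rather than returning the old bit):

```python
# Sketch: 68000-style combined bit operations — each tests a bit
# (here, returning its old value) and then sets, clears, or
# complements it in a single instruction.

def btst(value, bit):
    return (value >> bit) & 1

def bset(value, bit):
    return btst(value, bit), value | (1 << bit)

def bclr(value, bit):
    return btst(value, bit), value & ~(1 << bit)

def bchg(value, bit):
    return btst(value, bit), value ^ (1 << bit)

old, v = bset(0b0100, 1)
print(old, bin(v))  # 0 0b110
old, v = bchg(v, 2)
print(old, bin(v))  # 1 0b10
```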

Another 68000 instruction which combines two operations is CAS2, which Compares And Swaps two values. It is specifically useful in a multiprocessor environment, where if the operation was done in several steps another processor could grab control of the bus in the middle of the process and corrupt the operation. CHK is an unusual instruction which checks if the value in a data register is between 0 and a given boundary.  Unlike most comparison instructions, if the value is outside of those limits, it causes an interrupt (a trap) rather than setting a flag. Finally, there is RTR, which pulls not only the PC but also the CCR from the stack. It is like RTE, which can only be used in privileged mode, except that it does not restore the high byte of the SR.
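The compare-and-swap idea behind CAS can be sketched in Python (an added illustration; on the real chip the compare and the conditional store are one indivisible bus operation, which a Python function can only imitate):

```python
# Sketch: compare-and-swap semantics. The new value is stored only if
# the cell still holds the expected value; on real hardware this is
# atomic, so no other processor can interleave between compare and store.

def compare_and_swap(cell, expected, new):
    if cell[0] == expected:
        cell[0] = new
        return True      # swap succeeded
    return False         # someone changed the value first

cell = [5]
print(compare_and_swap(cell, 5, 9), cell[0])  # True 9
print(compare_and_swap(cell, 5, 7), cell[0])  # False 9
```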

Overall, in the majority of areas the 68000 has a more powerful and flexible instruction set than the 8086. That difference is due largely to its having more and larger registers and allowing more versatility in their use.

V. Other Differences

There are a few other differences which will be important to most assembly language programmers on these machines but which do not fit into any of the above categories.  For example, the 8086 allows words (16-bits) to start at any address, while the 68000 requires that words and long words (32-bits) start at even addresses.  Also, the two processors use opposite orders of storage for multibyte numbers: the 8086 stores the lowest valued byte first, the 68000 the highest. And lastly, the 8086, like most microprocessors, has only one operating mode. The 68000, designed more with multi-user environments in mind, has two operating modes: user mode, which is the normal one, and supervisor mode, which is able to use certain privileged instructions affecting the Status Register and I/O devices, like RTE and RESET.  As mentioned above, interrupts cause the 68000 to enter supervisor mode. Having two different modes is useful in preventing ordinary users from changing data belonging to the system or to other users, or even crashing the system.
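The byte-order difference described above is easy to demonstrate with Python's struct module ("<" is little-endian, the 8086's order; ">" is big-endian, the 68000's):

```python
# Sketch: storing the 16-bit value 0x1234 in each processor's byte order.
import struct

value = 0x1234
little = struct.pack("<H", value)  # 8086: lowest-valued byte first
big = struct.pack(">H", value)     # 68000: highest-valued byte first

print(little.hex())  # 3412
print(big.hex())     # 1234
```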

VI. Conclusion

Both the 8086 and the 68000, and their related families of microprocessors, contain a powerful assortment of registers, addressing modes, and instructions.  Both are able to perform high-precision arithmetic, allow large address spaces, and can access their operands in many different ways.  The 68000, however, has more and larger registers, and allows much greater versatility in their use. In addition, it has a privileged operating mode, which the 8086 lacks, to protect the system from disaster in complex operating environments. From the programmer’s point of view, then, the 68000 seems to be a more desirable chip to use than the 8086.


All technical data on the 8086 is from:

Rector, Russell, and Alexy, George. The 8086 Book. Berkeley, California: Osborne/McGraw-Hill, 1980.

All technical data on the 68000 is from:

Leventhal, Lance A.; Hawkins, Doug; Kane, Gerry; and Cramer, William D. 68000 Assembly Language Programming, Second Edition. Berkeley, California: Osborne/McGraw-Hill, 1986.
intel Pentium II (May 1997) / anandtech reviews the k6
«  by chrisNova777 on Today at 07:09:00 AM »

AMD K6 Review
by Anand Lal Shimpi on April 3, 1997 4:51 PM EST
AMD had a knockout on their hands with the K5; unfortunately, manufacturing problems and lagging delays kept them from grabbing a significant amount of the market. By the time the K5 was performing up to speed, Intel had already announced and released their much faster Pentium MMX series of processors, and Cyrix was already working on a successor to the 6x86. AMD's answer? A high-powered yet low-cost alternative to Intel's Pentium Pro and Pentium II, otherwise known as the K6.

The K6's main advantages include:

A highly advanced RISC86 core which decodes complex instructions into much smaller RISC86 operations for greater performance
64KB of L1 Cache
8.8 million transistors on a 162 mm2 die, using IBM's patented "flip-chip" technology
The smallest 6th generation processor out today!
High scalability from .30 micron fab process to a .25 micron fab process for use with upcoming K6 processors
Excellent price to performance ratio when compared to slower and more expensive Intel processors
First Generation AMD K6 Series Microprocessor
Chip Name   P-Rating   Clock Speed   Bus Speed x Multiplier
AMD K6-PR2/166   PR2/166   166MHz   66 MHz x 2.5
AMD K6-PR2/200   PR2/200   200MHz   66 MHz x 3.0
AMD K6-233   N/A   233MHz   66 MHz x 3.5
After its introduction in April, AMD quickly dropped the PR2 rating of the K6 in favor of actual clock speed ratings.  An intelligent decision on AMD's part, since the PR2 ratings were confusing to many interested in the K6.
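The bus-speed-times-multiplier relationship in the table above is simple to check (a sketch added here; note that the nominal "66 MHz" Socket-7 bus is actually 66.6 MHz, which is why 66 × 3.0 is listed as 200MHz):

```python
# Sketch: core clock = bus speed x multiplier, using the actual
# 66.6 MHz value behind the nominal "66 MHz" bus setting.

def core_clock(bus_mhz, multiplier):
    return bus_mhz * multiplier

for name, bus, mult in [("K6-166", 66.6, 2.5),
                        ("K6-200", 66.6, 3.0),
                        ("K6-233", 66.6, 3.5)]:
    print(f"{name}: {core_clock(bus, mult):.0f} MHz")
```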

Under Windows 95, the performance of the K6 is nearly identical to Intel's more expensive Pentium II. The major differences between the K6 and the Pentium II are its FPU performance and 32-bit Windows NT performance. If neither of those matter to you (I should mention here that the K6's FPU is very well made, and extremely fast contrary to popular belief), then the AMD K6 is a superb choice for a microprocessor. What does the future hold for the K6? Let's take a look at some of AMD's future plans...

AMD K6 266 (a.k.a. AMD K6+)

This bad boy will be AMD's successor to their first-generation K6 processor. Although it has been referred to as the AMD K6+ for some time now, the K6-266 will be a normal AMD K6 with a few interesting improvements on the original design.  The FPU of the K6-266 won't be a pipelined FPU, contrary to popular belief; however, AMD has made some improvements to the K6's core which will result in faster FPU performance overall.  Currently, the K6's FPU isn't as great as it could be, but it is in no way a "poor" FPU.  The Winbench98 FPUMark scores indicate that the AMD K6 isn't nearly as fast as the Pentium MMX in floating-point operations; however, if your main use for a computer is business applications with a few FPU-intensive applications/games on the side, the AMD K6 is still an excellent processor.

The AMD K6-266 will not be the power-consuming demon its predecessors were; requiring a core voltage of ~2.5v and producing a mere 8W of heat, the K6-266 will be a prime candidate for mobile systems as well as desktops.  Expect some nice overclocking options with the K6-266, especially when coupled with the 83.3MHz bus speed setting found on some of the most stable Socket-7 motherboards.


AMD has decided to take the proprietary route with the K6 in the first half of 1998 with the introduction of the K6-3D.  The K6-3D will essentially be a K6 with a few new features, such as a proprietary set of 3D instructions designed to aid in speeding up 3D geometry calculations.  Unfortunately, these instructions will be rivaled by Intel's MMX2 instruction set as well as Cyrix's own set of 3D functions, meaning that unless a standard is adopted by all of the manufacturers, both hardware and software, these instructions will remain unused, much like our current set of MMX instructions.

Along with the addition of a proprietary set of 3D instructions, the K6-3D will be the first Socket-7 processor to officially embrace a 100MHz bus speed, and likewise will be introduced with a minimum clock speed of 300MHz.   Naturally, the K6-3D, although maintaining full compatibility with the Socket-7 specification, will require a new motherboard with support for the 100MHz bus speed in order to achieve its full performance potential.  It may be possible, although not very intelligent, to run the K6-300 at 66 x 4.5, provided that AMD builds in support for the 4.5x clock multiplier internally on the K6-3D; however, with the K6-3D, expect your next motherboard purchase to be based on a non-Intel chipset, since Intel still refuses to support any bus speeds greater than the good ol' 66MHz setting we've put up with for the past 2 years.

Physically, the AMD K6-3D will feature a total of 9.3 million transistors on a die size of approximately 81 square mm.  Whether or not AMD will continue to use IBM's Flip-Chip technology with future K6 processors, including the K6-3D, is unknown; however, it would seem very unlikely that AMD would completely discontinue their current manufacturing methods in favor of a new procedure, especially this late into the year.  The manufacturing process used to implement the K6-3D will, of course, be a 0.25 micron process like Intel's new Tillamook processor as well as their upcoming additions to the Pentium II series.

AMD K6+ 3D

In order to take some of the limelight away from Intel when they release their long-awaited successor to the Pentium Pro, the Deschutes, AMD will be releasing a Deschutes of their own, in a sense.  In the late third quarter of 1998, AMD will be introducing the product of most of their efforts, the AMD K6+ 3D (what is it with companies and 3D these days?). The AMD K6+ 3D will feature all of the goodies the K6-3D will, as well as some "Pro-like" features which should have Intel shaking. Why?

AMD will supposedly introduce the K6+ 3D, with a full 256KB of 4 way set associative L2 cache, ON CHIP, running AT clock speed. 

The K6+ 3D will also feature an optional Level 3 (L3) cache, commonly found in high-end processors such as Digital's 21264, commonly known as the Alpha. The actual size of the L3 cache has yet to be determined; however, expect it to be in the range of 1MB - 8MB in order to stay competitive with the market's demands and Intel's offerings with their Deschutes.  The L1 cache size of the K6-3D and the K6+ 3D will most likely be at least 128KB, with the possibility of it being as great as 256KB, although more than 256KB of L1 cache will quickly become a manufacturing limitation and will eventually drive the cost of the processor beyond what most are willing to pay for a non-Intel solution.   The K6+ 3D's die size, somewhat larger due to the L2 cache, of 135 square mm will house a whopping 21.3 million transistors.  I wouldn't get too excited about that number, since the 21.3 million transistors include those required for the onboard L2 cache, not the processor exclusively.

Marketing 101

AMD's marketing angle for the K6-266, K6-3D, and K6+ 3D will be their price to performance ratio.  If AMD can deliver the chips the market demands, on time, and in great quantities at an affordable cost, they will gain the 30% of the market they are looking for.  However, if AMD is plagued once again by distribution problems, and if the price of the next generation K6 chips isn't what the market is willing to pay for a low cost alternative, then Intel will once again cast their dark shadow over the microprocessor industry.

1998 will be the year for competition, most of which Intel plans to eliminate with the introduction of their 64-bit masterpiece, the Merced a year later in '99.  AMD will continue to pursue their high goals even after the K6 processor dies out, especially with the rumored compatibility between Slot-1 and AMD's upcoming K7 to be released in 1999.

AMD K6 Performance

The following tests were conducted using the same configuration used in all AMD K6 motherboard tests, and can be found on the Socket-7 Motherboard Comparison Guide from which these scores were extracted.  Expect more scores to be up later this week, including a full set of Business Winstone 98 scores.

Windows 95 Performance of the AMD K6
Chip   Business Winstone 97
K6-166 (66MHz x 2.5)   52.1
K6-188 (75MHz x 2.5)   Not Run
K6-200 (66MHz x 3.0)   54.5
K6-208 (83MHz x 2.5)   58.9
K6-225 (75MHz x 3.0)   57.5
K6-233 (66MHz x 3.5)   57.0
K6-250 (83MHz x 3.0)   59.9
The Business Application performance of the AMD K6 is outstanding; expect more scores in this category later, especially Business Winstone 98 scores, which test multitasking/task-switching capabilities, a feature not found in Winstone 97.

AMD K6 Performance
Chip   FPU Mark   CPU Mark32
K6-166 (66MHz x 2.5)   540   492
K6-188 (75MHz x 2.5)   606   554
K6-200 (66MHz x 3.0)   647   524
K6-208 (83MHz x 2.5)   673   609
K6-225 (75MHz x 3.0)   728   604
K6-233 (66MHz x 3.5)   753   558
K6-250 (83MHz x 3.0)   807   675
The performance of the AMD K6 under Windows 95 is excellent; however, the FPU Mark scores are a bit low compared to Intel's Pentium MMX and Pentium II processors.

From these scores you can easily tell (or you will be able to tell) that the K6 is not the world's best Quake performer; however, when coupled with a Diamond Monster 3D, it isn't all that bad.  The 3D Winmark 98 scores indicate that the K6's 3D performance, which is very FPU intensive, is not as strong as that of the Pentium II; however, it is very competitive when compared to the Pentium MMX.

In order to implement and utilize Ultra ATA technology (also referred to as Ultra DMA or Bus Mastering) your system must have all four of the following elements:

1. An Ultra ATA/33, Ultra ATA/66, or Ultra ATA/100 compatible chipset or host adapter.
2. An Ultra ATA capable system BIOS.
3. Ultra ATA device drivers.
4. An Ultra ATA capable hard drive or CD-ROM.
"the move to Ultra66 required 80pin IDE cables instead of the existing 40pin IDE cables"

Back in 1997, with Intel's release of the i430TX chipset, one of the most highly touted features of the new chipset standard was its support for the Ultra ATA/33 hard drive interface standard. Ultra ATA/33, by definition, allowed for burst transfer rates of up to 33.3MB/s for compliant EIDE devices over the PCI bus. Ultra ATA/33 was, at the time, the latest attempt at a low-cost competitor to the high-end SCSI standard for storage devices. The reason for the move to Ultra ATA/33, which was an effective doubling of the previous burst transfer rate standard for EIDE devices (DMA Mode 2 - PIO Mode 4), was the internal improvement of EIDE hard drives: the drives had reached a point where they could retrieve data internally faster than they could send it to the host controller. That bottleneck presented a dilemma, and the easiest solution came in the form of the Ultra ATA/33 standard, which doubled burst transfer rates and bought the industry a couple more years until the performance bar needed to be lifted once again.

For those of you that were into computer hardware when the TX chipset became popular, it's quite difficult to remember exactly when Ultra ATA/33 took off, as it was a highly criticized "feature" due to its relatively small performance improvement over previous standards. Today, if you look at any EIDE hard drive, chances are you won't find anything that isn't Ultra ATA/33 compliant. Isn't it funny how changes come to be?

Just as the industry reached that limitation in 1997, the time for the next "big" jump in hard drive interface standards is upon us: say hello to Ultra ATA/66.

pentium II 233 / 266 / 300 released on May 7th 1997

APRIL 22, 1997 3:00 PM PDT
Intel (INTC) will introduce its latest generation of processors, the Pentium II, on May 7, the company said today in an official announcement.
The Pentium II is the next generation Pentium Pro processor. It is expected to be introduced in 233-, 266-, and 300-MHz versions, as previously reported by CNET NEWS.COM.

The Pentium II is distinguished from the Pentium Pro by the addition of MMX technology, which increases performance for multimedia functions such as graphics, and video and audio playback. The Pentium II is also constructed differently--it will come on a small module, or what Intel calls a "cartridge," which holds the chip and the cache memory. Cache is very high-speed memory that boosts performance of the chip.

Although Intel has made no official announcements about pricing yet, the 233-MHz Pentium II processor is expected to be priced slightly below $600 with 512K of "L2" cache memory on the module. Pricing for a 266-MHz version is expected to be just over $700. Pricing for the 300-MHz version is expected to be between $1,500 and $2,000--and likely closer to $2,000, according to sources.

Systems with Pentium II processors are expected to be priced initially above $2,500, with most over $3,000.

Intel is expected to standardize Pentium II processors with 512K of cache, as opposed to the 256K version, according to sources. Usually, the larger the cache memory, the better the performance.
intel Pentium II (May 1997) / Asus P2B (May 1997)
«  by chrisNova777 on Today at 06:26:45 AM »

Review Date: 4/18/98
May 7, 1997
Intel released the Pentium II
May 7, 1997   ASUS complemented the release of the Pentium II with their first Pentium II motherboard, the KN97-X
April 15, 1998   Intel released the 440BX AGPSet
April 15, 1998   Once again ASUS complemented the release of the BX chipset with their first motherboard to make use of the chipset, the P2B.
ASUS is at it again, this time armed with the full power and potential of the 440BX chipset as well as a few unofficially supported yet documented bus speed settings on the new P2B. Adhering to the unwritten ASUS Code, the P2B was designed to break barriers and become a leader for the competition to follow. Did ASUS accomplish their understood goals with the P2B? How well does the P2B compare to the fierce competition from the BX6 and AX6B? Let's find out.
Anand Tech Report Card Rating
Motherboard Specifications
CPU Interface   Slot-1
Chipset   Intel 440BX
L2 Cache   N/A (on-card)
Form Factor   ATX
Bus Speeds   66 / 75 / 83 / 100 / 103 / 112 MHz
Clock Multipliers   2.0x - 8.0x
Voltages Supported   1.5v - 3.5v (Auto Detect)
Memory Slots   3 168pin DIMM Slots (EDO/SDRAM)
Expansion Slots   1 AGP Slot
4 PCI Slots
3 ISA Slots (1 Shared / 3 Full Length)
The Good
Much like the LX-based P2L97, which made its introduction last year, the P2B is available in a fairly tiny ATX form factor. Taking a step back from the loosely enforced PC98 standard (which calls for a configuration made up entirely of AGP/PCI slots), ASUS chose to outfit their first BX board with 4 PCI, 3 ISA, 1 AGP, and 3 DIMM slots for peripheral/memory expansion. Those of you with a few ISA cards lying around will be pleased to know that ASUS hasn't forgotten about you entirely, while those of you ready and waiting for the jump to PCI modems and sound cards may be a little disappointed by the presence of only 4 PCI slots.
From an engineering point of view the P2B is nothing short of a success; ASUS' clever placement of the plentiful electrolytic capacitors between the Pentium II's SEC slot, the BX chipset, and the memory banks makes the board all the more appetizing. The presence of only 3 DIMM slots eliminates the need for an external DRAM data buffer like that found on the ABIT BX6 and Soyo SY-6BA.
ASUS originally intended to hop on the jumperless CPU setup bandwagon in 1997 with the first revision of their P2L97; unfortunately, after a few isolated problems with their custom-made configuration utility, it became clear that ASUS wasn't ready for the jumperless setup world just yet. Sticking to the more conventional (and sometimes more reliable) jumper-driven configuration, the P2B's setup is almost identical to that of the old KN97-X. With the clock multipliers and bus speed settings documented on the motherboard, the well-written ASUS User's Manual isn't even necessary for the basic setup of the motherboard.
The P2B supports clock multipliers ranging from 2.0x - 8.0x in 0.5x steps, as well as the highly anticipated 66/100MHz bus speed settings. In addition to the supported settings are a few "just-for-fun" options, including the 75, 83, 103 (Turbo Frequency), and 112MHz bus speeds. The current revision of the P2B makes no mention of a 133MHz bus speed, which isn't a big loss since making use of a 133MHz bus speed setting requires sub-8ns PC100 SDRAM; realistically, 6ns modules are needed for the most stable operation at that speed.
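Since the CPU core clock is simply the bus speed multiplied by the clock multiplier, the settings above translate directly into core frequencies. A minimal sketch (the 100 x 3.0, 100 x 3.5, and 112 x 4.0 combinations are the ones used in the testing below):

```python
def core_clock_mhz(fsb_mhz: float, multiplier: float) -> float:
    """CPU core clock = front-side bus frequency x clock multiplier."""
    return fsb_mhz * multiplier

print(core_clock_mhz(100, 3.0))  # a Pentium II 300 run at its rated speed on the 100MHz FSB
print(core_clock_mhz(100, 3.5))  # the same chip overclocked to 350MHz
print(core_clock_mhz(112, 4.0))  # the 448MHz setting used in the benchmarks
```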
Packaged with the motherboard is the excellent User's Manual mentioned above, in addition to the standard ASUS CD-ROM which includes the Desktop Management Interface Utility, a Flash Memory Writer (used to upgrade the Flash BIOS), LANDesk Client Manager Software, and ASUS' own PC Probe Utility.  The Flash BIOS on-board is an Award BIOS chip that features an increased amount of configuration options when compared to some of the competing BX motherboards out today.  Once again, the usefulness of the more expensive SDRAM with EEPROM or Serial Presence Detect becomes evident when using it with newer motherboards, including the P2B.  The Chipset Features Setup under the Award BIOS allows you to manually set the SDRAM CAS Latency, RAS to CAS Delay, and RAS Precharge Timings or, if you happen to have SDRAM with the onboard EEPROM (SPD) then you can simply enable the Configure SDRAM By SPD option in the BIOS and everything will be taken care of. The Award BIOS Setup also includes the unique feature of enabling ASUS' Anti Boot Virus Protection, a safety feature although not a true replacement for Anti-Virus Software.
The P2B made it up to 112MHz x 4.0 just as well as the ABIT BX6 and AOpen AX6B without any problems at all; the board never crashed once during the period of extensive testing, which makes it one of the most stable BX boards out today. Unlike the BX6 tested, the P2B managed to run an original Pentium II - 300 at 100 x 3.0 as well as 100 x 3.5 without any problems whatsoever. The performance of the P2B is more or less on par with the ABIT BX6, outscoring the BX6 by a full 2 Winstone points under Winstone 97 when running at 448MHz, proving that the more flexible BIOS setup does help in tweaking performance somewhat.
The Bad
A cramped ATX layout makes the lack of a jumperless setup more evident on the P2B; aside from that, the presence of only 3 DIMM slots keeps ASUS' first BX motherboard from being an even better product than it already is.
intel pentium III (Feb 1999) / AMD's Athlon (1999)
«  by chrisNova777 on Today at 06:22:50 AM »

We all owe something to AMD; regardless of what processor you use in your system now, chances are that your buying decision was somehow influenced by AMD. Intel advocates enjoyed the benefits of more competitive pricing and accelerated processor release schedules that put faster processors in the hands of the masses at a cheaper price. AMD supporters obviously enjoyed the benefits of an alternative to Intel and a chance to root for the underdog, something the market has a general tendency of doing. Even for AnandTech, AMD provided us with a start: it was the April 1997 release of the K6 microprocessor that originally sparked the idea to start AnandTech a little over two years ago.

For those of you that weren't much into the desktop x86 processor market in 1997, let's set the scene. Intel's greatest competition in the desktop processor market was Cyrix with their 6x86-PR200+, a processor that did nothing but confuse potential buyers with its 150MHz operating frequency. AMD's threat to Intel's market share was insignificant after a huge blow to their reputation courtesy of the incredible delay in releasing the K5 processor. Intel's Pentium was nearing the end of its life span at 200MHz, and the Pentium Pro 200 was keeping high-end users happy by providing excellent performance and support for heavy-duty multiprocessor servers. The hardware enthusiast looking for an increase in power could always pursue a dual Pentium system that was considerably cheaper than a single-processor Pentium Pro system.

Wednesday morning, April 2nd, 1997 marked the introduction of the AMD K6 processor. The rumors that surrounded the release included leaked information that the K6-200 would be able to compete with the Pentium Pro 200 and undercut Intel's costs by a margin of around 25%. Others claimed that the K6 would mark the return of competitive non-Intel CPUs, an increasing rarity since the release of the 486. The nondisclosure agreements expired, the information embargoes were lifted, and the results were published. The K6 ended up being more of a competitor to Intel's latest Pentium MMX than the Pentium Pro, and for games, the K6 was considerably slower than Intel's offering at similar clock speeds. The K6's FPU was dubbed weak, and the processor was deemed a low-cost solution. A problem with supplying the chips to end users [at a reasonable cost] was a major issue that the company later became known for. Who would've expected that AMD would be doomed to the flaws of the K6 for the next two years, and that the return of the Intel competitor would put AMD chips mainly in low-cost systems and tarnish the name as an underdog, and nothing more?

Tried and Failed

With every processor release following the K6, the hopes remained high, but the results were generally disappointing. The next "big break" for AMD came with rumors that a 266MHz K6 would make it out in time for a November '97 launch, which would once again pit AMD against Intel in a battle of clock speeds. With Intel pushing the Pentium II to speeds of 300MHz and receiving decent yields on the same 0.35-micron process they had been using with the Pentium MMX, AMD needed to compete on a clock-for-clock basis. The only way a K6 faster than 233MHz could be produced would be on a 0.25-micron fabrication process. Rumors began to surface about extremely poor yields on 0.25-micron parts from AMD, and the release of the 266MHz K6 was pushed back to early 1998; the parts became available around February, but Intel had already moved to 0.25-micron and was pushing 333MHz.

The next opportunity for AMD came with addressing their weakness: their FPU. AMD was in a lose-lose situation; they weren't able to produce enough processors to compete with Intel solely on clock speed, and at the clock speeds they were currently at, they weren't able to produce high enough performing parts to make a significant difference in market share. Granted, they were making progress, but the outlook didn't seem good. Rumors (you've gotta love those) began to surface about a mysterious K6-3D processor, supposedly a 300MHz K6 with an improved FPU, possibly one that could match or even exceed Intel's current offerings at a lower cost. AMD's policy became to undercut Intel's pricing whenever possible; however, sometimes it just wasn't possible. AMD was involved in a price war, one that they were trying very adamantly to win, but as we've seen in the past, when a company is involved in a price war, it often stops worrying about competing on other levels, including performance.

Take two nearly identical Intel chips, both with 32KB of L1 cache, both running at nearly identical clock speeds, and see how well they perform against each other. Here are the specs of the tests I conducted; I tried to make them as equal as possible:
Test Configuration   Intel Pentium MMX   Intel Pentium II
Chip - Standard Clock Speed/Bus Speed   233/66   266/66
Chip - Overclocked Speed/Bus Speed   290/83   300/66
Motherboard   Shuttle HOT-569   Megatrends FX83-A
Voltage Setting   3.30v Vio & Vcore   N/A
RAM   2 x 32MB Megatrends SDRAM DIMMs   2 x 32MB Micron 50ns EDO SIMMs
HDD   Western Digital Caviar (1.6GB)   Western Digital Caviar (1.6GB)
Video   Matrox Millennium (2MB WRAM)   Matrox Millennium (2MB WRAM)
System Cooling   Enlight 7230 ATX Tower w/ 1.5" Heatsink/Fan Combo   Enlight 7230 ATX Tower w/ OEM Heatsink
Windows 95 Performance Pentium MMX vs Pentium II
Business Winstone 97

Test   Intel Pentium MMX   Intel Pentium II
Business Winstone 97   57.6   66.5
High End Winstone 97   Failed   34.4
CPUMark16   569   612
CPUMark 32   564   820
The Pentium MMX at 290.5, although clocked about 3% lower than the Pentium II at 300, provided excellent competition. Its Business Winstone score of 57.6, although 8.9 points lower than the Pentium II at 300, is extremely good for a processor lacking any on-chip (or in this case, "on-card") cache. The difference between the CPUMark scores is very small, except when dealing with the 32-bit scores, the Pentium II's strong point. Who would've guessed that a < $300 chip could perform almost as well as a > $700 chip... both made by the same company! The only problem I see here is getting the Pentium MMX stable at 290.5MHz; when testing it, the system crashed a few times, however I believe most of the crashes were caused by inefficient cooling (I didn't use any thermal compound). I picked the Shuttle HOT-569 to conduct the tests on because of its excellent performance and stability with the Pentium MMX at 290.5 as well as its SDRAM support. I chose the Megatrends FX83-A for all of the Pentium II tests since it is the fastest-performing Pentium II motherboard I have tested so far, and it continues to prove its excellence in the Quake tests below... although you do get to see some interesting results...

WinQuake Performance Pentium MMX vs Pentium II

Resolution   Intel Pentium MMX   Intel Pentium II
320 x 200   61.9 fps   57.3 fps
512 x 384   25.9 fps   35.8 fps
640 x 400   25.9 fps   31.7 fps
640 x 480   22.4 fps   31.7 fps
Why would a processor almost twice the price of its supposedly lower-performing counterpart be outshone by it? Yep, the results say it all: at 320 x 200 under Quake, the Pentium MMX at 290.5MHz is FASTER than the Pentium II! Although that trend doesn't continue at the higher resolutions, if you feel that 320 x 200 is the only resolution you want to play at, there is no point in choosing the Pentium II over the Pentium MMX. I should note that I didn't use any third-party performance-enhancing utilities (i.e. FastVid) so I could get a raw comparison of both identically configured systems. The Pentium II rules the 512 x 384 and higher resolutions under WinQuake (or regular Quake for that matter), however the Pentium MMX is still the fastest GLQuake performer out there (the Pentium II doesn't work too well with the Voodoo chipset).

Real World FPU Tests Pentium MMX vs Pentium II
Truespace3 Render Times

Chip   Render Time (lower is better)
Intel Pentium II - 233   13.82s
Intel Pentium II - 266   11.51s
Intel Pentium II - 300   10.30s
Intel Pentium MMX 233   14.11s
Intel Pentium MMX 262.5   12.14s
Intel Pentium MMX 290.5   10.83s
A Message from Our Founder, Anand Shimpi
I started AnandTech as a hobby on April 26, 1997. Back then it was called Anand's Hardware Tech Page, and it was hosted on a now-defunct free hosting service called Geocities. I was 14 at the time and simply wanted to share what I knew, which admittedly wasn't much, with others on the web.

In those days PCs were very expensive and you could often save a good amount of money buying components and building your own. We have our roots in reviewing PC components and technologies.

Today the definition of what constitutes a PC is much broader than it has ever been. I look at smartphones, tablets, set-top boxes, Macs, notebooks and of course desktops as PCs or more generally - computers. They all have a CPU, GPU, memory and some form of storage. These devices mostly vary in terms of how powerful they are and how you interact with them, but the components are all the same. The one thing we've done consistently since 1997 is evaluate all of these components and the devices that implement them.

In the beginning you could classify AnandTech as a motherboard review site. I reviewed over 200 motherboards on my own before we got our first motherboard editor. From motherboards we moved to CPUs then video cards (later: GPUs). We added storage, memory, cases and display reviews. Full systems came next: notebooks and desktops became part of our review repertoire. As Apple began using more of the same components we were already reviewing in its machines, we began reviewing Macs as well. As smartphones and tablets did the same, we added them to the list. We can't (and won't) review everything, but we will review those products and technologies that we can lend our methodologies and expertise to.

Today AnandTech serves the needs of readers looking for reviews on PC components, smartphones, tablets, pre-built desktops, notebooks, Macs, and enterprise/cloud computing technologies. We are among the largest technology websites, doing all of this with a level of depth that we feel isn't available elsewhere.

While we are no longer an independent site, we still operate like a small business with big ambitions. We are motivated by one thing and one thing only: doing right by you.

The Intel 440BX chipset has been with us ever since it was introduced in May of 1998. This is quite unusual for a Slot-1 chipset, since the first two chipsets for the Pentium II never lasted more than a few months. The first Pentium II chipset, the 440FX, lasted only a few months, from the introduction of the Pentium II until August of 1997, when the 440LX chipset made its debut. The latter managed to stay alive for 7 months before being replaced by the 440BX chipset. So why is it that the BX chipset has been around for an incredible 24 months and is still being used by motherboard manufacturers?

In order to answer that question you have to go back to the theory that necessity is the mother of invention.

Intel needed a chipset for the Pentium II and they needed it at the release of the CPU in May, not a few months later. What they ended up doing was taking the 440FX chipset, otherwise known as the Natoma, which was used on entry-level Pentium Pro motherboards, and presenting it to motherboard manufacturers as a Pentium II solution as well (since the Pentium II used the same bus as the Pentium Pro).

While the Pentium II was gaining momentum, Intel was working on implementing a "new" graphics bus into their next 440 chipset, which ended up being the 440LX, the world's first AGP enabled chipset.

The upgrade to the BX came about because Intel felt the need to leave the limiting 66MHz FSB of the Pentium II 333/300/266/233 behind and replace it with a faster 100MHz FSB frequency.  This increase in FSB frequency would not only lower the clock multiplier of future Pentium II CPUs but it primarily offered a higher bandwidth data path from the CPU to the chipset and to the memory.  With the AGP bus taking up to 533MB/s of system bus/memory bandwidth, the 533MB/s of available memory bandwidth on the 66MHz 440LX chipset could potentially become a limiting factor; by increasing the system bus and memory bus operating frequency to 100MHz, the amount of available memory bandwidth also increased to 800MB/s.
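The bandwidth figures above follow directly from the 64-bit (8-byte) width of the Pentium II system/memory bus; a minimal sketch of the arithmetic:

```python
BUS_WIDTH_BYTES = 8  # the Pentium II system/memory bus is 64 bits wide

def peak_bandwidth_mbps(bus_mhz: float) -> float:
    """Peak bandwidth in MB/s = bus frequency (MHz) x bus width (bytes)."""
    return bus_mhz * BUS_WIDTH_BYTES

print(peak_bandwidth_mbps(66.6))   # ~533MB/s on the 66MHz 440LX
print(peak_bandwidth_mbps(100.0))  # 800MB/s on the 100MHz 440BX
```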

The next step in the evolution of Intel chipsets came with what was then known as the Camino chipset, the successor to the popular BX.  It was originally thought that the Camino chipset, now known as the i820, would offer everything the BX chipset had to offer while adding 133MHz FSB support as well as Ultra DMA 66 support.  As more information was released, it quickly became known that the i820 would support a brand new type of memory, RDRAM, but at the same time, the chipset would be able to work alongside SDRAM.

While all of these rumors ended up coming true in one sense or another – the i820 did add 133MHz FSB/Ultra DMA 66 support, and it did support SDRAM in addition to RDRAM (though only if a Memory Translator Hub was implemented on the motherboard) – the fact of the matter was that the i820 as a platform was not affordable enough (thanks to the high price associated with RDRAM) and didn't offer a large enough performance improvement over the 440BX; its even poorer performance, and possible instability, when used in conjunction with SDRAM and an MTH made the chipset a highly undesirable solution, even to motherboard manufacturers.

This put motherboard manufacturers in an interesting situation.  Intel was obviously pressuring them to promote and sell as many i820 motherboards as possible yet they couldn't since there wasn't a great enough demand for them.  VIA began offering their Apollo Pro 133A chipset to motherboard manufacturers that needed 133MHz FSB support without having to move to i820.  Unfortunately, in the usual manner of VIA chipsets, the Apollo Pro 133A's overall performance was not the best, and in a head-to-head comparison, at the 100MHz FSB the Intel 440BX chipset would actually pull out ahead.

The only real advantage VIA's 133A offered over the 440BX was that it officially supported the 133MHz FSB frequency, and with the 1/2 AGP multiplier, that would allow the AGP bus to operate within spec when the FSB is set to 133MHz (133/2 = 66MHz = AGP spec clock speed).

Even the first BX motherboards ever released featured unofficial support for the 133MHz FSB setting, but there was a lack of memory that could run at that frequency (as memory was just starting to ship as PC100 compatible) as well as a lack of AGP video cards that were capable of running at 89MHz, which is what the AGP bus would operate at when the FSB was raised to 133MHz (133 * 2/3 = 89MHz = 35% over 66MHz spec).

BX at 133MHz
As we mentioned in our RDRAM Performance article, the BX chipset is now capable of running, albeit unofficially, at the 133MHz FSB frequency.  What has changed since May 1998 that allows for a BX motherboard to run at 133MHz?

For one thing, motherboard manufacturers have been tweaking their designs quite a bit over the past two years.  The BX motherboard platform in general is at the point where you shouldn't have to worry too much about variations in performance or stability when going from one motherboard to the next.  This perfection of the motherboard design, especially from the companies that have had quite a few BX boards (i.e. ASUS, ABIT…) has made their reliability at higher FSB frequencies much more of a reality and less of a dream.

Secondly, the PC133 memory standard has been completed and implemented by VIA as well as refined by Intel.  There is finally memory available that was designed with a 133MHz operating frequency in mind, and not too long ago Micron began shipping their –7E parts, which are officially rated at 133MHz CAS2, which provides for an additional 5 – 10% performance improvement over 133MHz CAS3.


Finally, video card manufacturers have been designing their video cards to operate at a greater range of frequencies that remain outside of the 66MHz AGP specification.

A combination of all three of these factors has made it possible to run BX motherboards at the 133MHz frequency, even without the presence of a 1/2 AGP clock divider (there would have to be a revision of the BX North Bridge in order to add support for that divider).  Now, not all BX motherboards are capable of running at the 133MHz FSB, but there is definitely a high chance of it working on the latest BX motherboards, especially those produced by such companies as ABIT, ASUS, AOpen, and MSI, to name a few.

Ultra DMA 66 on a BX?
It doesn't make sense to buy a BX motherboard now with hopes of it remaining capable of running the latest processors in a few months (Willamette will use a completely different bus), but one thing a lot of potential BX motherboard owners were worried about was not having Ultra DMA 66 support on their BX motherboards.

As we proved in our Ultra DMA 33 vs. Ultra DMA 66 comparison, the Ultra DMA 66 specification does not provide any tangible performance benefits for today's hard drives but that is quickly changing.  The IBM Deskstar 75GXP is supposed to be able to provide performance that is limited by the Ultra DMA 33 specification, which could cause problems for BX motherboard users since their boards would be limiting their disk performance.

Last year, when it became clear that the BX chipset would be around for at least a little while longer while Intel readied the i820 chipset, motherboard manufacturers began adding external Ultra DMA 66 controllers to their motherboards.  At that time, there wasn't really a need for Ultra DMA 66 support since no hard drives could burst at above 33MB/s, but quite a few users went after the motherboards simply because they supported Ultra DMA 66.

There are a few options for users when it comes to having Ultra DMA 66 support on a BX motherboard. Currently Promise, CMD and HighPoint manufacture controllers that are being used on BX motherboards in order to add Ultra DMA 66 support. If your motherboard doesn't feature one of those on-board controllers, you can always purchase an add-on card based on one of them.

High Point HPT366

Used on: ABIT BE6-II & Soyo SY-6BA+IV

Promise PDC20262

Used on: Gigabyte GA-6BX7+ & Microstar BXMaster

CMD 648

Used on: ASUS CUBX

AGP 2X vs. AGP 4X
Another one of the paper advantages the i820 and Apollo Pro 133A chipsets hold over the old 440BX is their support for AGP 4X transfer modes which are theoretically twice as fast as the AGP 2X transfer rates supported by the BX chipset.

The 32-bit wide AGP bus, when operating in 2X mode, allows for a peak transfer rate of 533MB/s. The same AGP bus, when operating in 4X mode, allows for a peak transfer rate of 1.06GB/s. Going by those two numbers alone, you can definitely see where AGP 4X could hold a performance advantage over AGP 2X, but if you take into account that the amount of available memory bandwidth on your graphics card is going to be between 3 – 5GB/s (2.7GB/s for the GeForce and 5.3GB/s for the GeForce 2), all of a sudden the 1.06GB/s of bandwidth offered by AGP 4X isn't all that great.

The performance hit you get when going from local memory on your graphics card to system memory via the AGP bus is so great that the difference between the AGP 4X transfer rates and AGP 2X transfer rates remains of very little significance.

Another thing to take into account is that, since the BX chipset only supports an AGP to FSB ratio of 2/3 or 1/1, at 133MHz FSB the AGP bus will be running at 89MHz which is a full 33% over the 66MHz specification.  This also translates into a higher transfer rate across the AGP bus since the operating frequency of the bus is higher.  More specifically, at 89MHz, you get something along the lines of an AGP 3X transfer rate although a bit slower than what that would actually be (since 100MHz AGP would theoretically be equal to AGP 3X).  The actual peak transfer rate across the AGP bus then becomes around 712MB/s which is a 34% increase over the 533MB/s of AGP 2X.
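Both the 89MHz figure and the ~712MB/s peak transfer rate can be reproduced from the BX's 2/3 AGP divider and the 32-bit (4-byte) width of the AGP bus; a minimal sketch of the arithmetic used above:

```python
AGP_BUS_WIDTH_BYTES = 4  # the AGP bus is 32 bits wide

def agp_clock_mhz(fsb_mhz: float, divider: float = 2 / 3) -> float:
    """The BX chipset only offers 2/3 or 1/1 AGP-to-FSB clock ratios."""
    return fsb_mhz * divider

def agp_peak_mbps(clock_mhz: float, transfers_per_clock: int) -> float:
    """Peak AGP transfer rate = clock x transfers per clock x bus width."""
    return clock_mhz * transfers_per_clock * AGP_BUS_WIDTH_BYTES

print(agp_clock_mhz(133))                    # ~89MHz, about 33% over the 66MHz spec
print(agp_peak_mbps(66.6, 2))                # AGP 2X: ~533MB/s
print(agp_peak_mbps(66.6, 4))                # AGP 4X: ~1066MB/s (1.06GB/s)
print(agp_peak_mbps(agp_clock_mhz(133), 2))  # ~712MB/s at 89MHz in 2X mode
```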

In order to prove that the difference between AGP 4X and AGP 2X is negligible, we naturally ran a set of benchmarks comparing the two.  In order to eliminate all potential bottlenecks and truly compare AGP 4X and AGP 2X, we ran the benchmarks on an i820 platform with a GeForce 2 GTS.  For comparison's sake, we've included AGP 1X scores as well.

As you can see, in a normal gaming situation, there is very little difference between AGP 4X and AGP 2X.

Even in a memory-intensive situation such as Quaver, the difference is not that great, although in this particular case the GeForce's S3TC support, as well as the enhanced texture management routines that are part of the 5.22 Detonator drivers, increases the efficiency of local graphics memory use, thus minimizing the need for AGP texturing.

In a high-end test, represented here by SPECviewperf, the performance difference between AGP 2X and AGP 4X is negligible, but there is definitely a huge difference between AGP 1X and the latter two transfer modes.

The Candidates

We rounded up a total of seven BX motherboards for this roundup:




Gigabyte GA-6BX7+

Microstar BXMaster

Soyo SY-6BA+IV

AOpen AX6BXC Pro Gold

The first thing we noticed after running through all of the benchmarks and stability tests was that, overall, each one of the seven boards performed just about equally in terms of stability when running at 133MHz.

No board crashed more than three times during a 24-hour looped run of Content Creation Winstone 2000.  This is compared to the 6+ times that most average Apollo Pro 133A and VIA KX133 motherboards crash during the same 24-hour period.  The BX platform is definitely very refined, and even when overclocked, provided that you have properly selected your components (PCI cards don't really matter since you can run your PCI bus at 133MHz / 4 which keeps them in spec at 33MHz), your BX133 platform should be just as stable as any other 133MHz platform out there.
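The in-spec PCI claim above comes straight from the selectable PCI clock divider; a quick sketch (the BX uses a 1/3 divider at the stock 100MHz FSB and offers 1/4 for the 133MHz setting):

```python
def pci_clock_mhz(fsb_mhz: float, divider: int) -> float:
    """PCI clock = FSB frequency / divider; the PCI spec calls for 33MHz."""
    return fsb_mhz / divider

print(pci_clock_mhz(100, 3))  # ~33.3MHz at the stock 100MHz FSB
print(pci_clock_mhz(133, 4))  # 33.25MHz at 133MHz, still within spec
```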

The highest we could push any of these boards reliably was around the 155MHz FSB.  The Soyo SY-6BA+IV was one of the only boards to run our 733MHz test chip at 155MHz x 5.5 reliably, even while running 3D games and applications.  But at 155MHz x 5.5, there was a noticeable drop in stability when compared to the SY-6BA+IV at the 133MHz setting.  We could've probably pushed the board even higher, but it lacked the FSB settings to go any higher.  Before you start asking, our ABIT BF6 was only able to get to around 150MHz before our benchmarks would no longer run reliably, so the 1MHz FSB increments above 150MHz weren't of much use.  We tested this using Micron –7E SDRAM, which is rated at 133MHz CAS2 and is the only currently available PC133 SDRAM capable of running at 133MHz CAS2.

Another issue we encountered was that on the ASUS CUBX, the CMD controller that provides for Ultra DMA 66 functionality required that we manually enable the Ultra DMA 66 setting, in spite of the fact that we were using Ultra DMA 66 drives and cables.

Regarding all of the boards that feature external Ultra DMA 66 controllers, if the drivers for those controllers were not installed properly or at all from the start, we noticed very erratic behavior under Windows often resulting in random lockups and failures to boot Windows properly, so make sure you get those drivers installed.

The HighPoint controller on the ABIT and Soyo motherboards was the only Ultra DMA 66 controller to come up as two devices under the SCSI devices section of Windows' Device Manager.  At the same time, the CMD controller on the ASUS CUBX was the only controller to come up properly as an IDE controller.  Those are just some of the odd quirks about working with these boards.
Pro Tools / Add a plugin to ALL TRACKS at once
«  by chrisNova777 on September 21, 2017, 09:59:05 PM »

How do I add a plugin to all tracks at once in Digidesign Pro Tools?
Article #15889 Updated on Apr 27, 2007 at 12:00 AM
Go to the Mix window and add a plugin as normal, but hold down the ALT key while selecting it. This creates an instance of the plugin on every track. It can save a lot of time when you want to add, say, an EQ to all or most tracks of a 24-track project. The tracks are not linked together; the plugin is simply created on each one, so you can remove it from any tracks where you don't want it.
audio - Firewire / digidesign digi 002 (july 2002)
«  by chrisNova777 on September 21, 2017, 09:46:42 PM »
Digi 002 is Digidesign's first FireWire-enabled Pro Tools workstation to include an integrated control surface.

What are the system requirements for the Digi 002 and Windows/PC computers?
Article #16124 Updated on Apr 27, 2007 at 12:00 AM
Digidesign can only assure compatibility and provide support for hardware and software it has tested and approved. For a list of Digidesign-qualified computers, operating systems, and third-party devices, refer to the latest compatibility information on the Digidesign Web site.

– A Digidesign-qualified, single-processor, Windows-compatible computer
– Windows XP Home Edition
– At least 256 MB RAM
– A CD-ROM drive or equivalent drive
– Color monitor, with minimum resolution of 1024 x 768

Hard Drive Requirements for Windows

For audio recording and storage in Windows, Pro Tools LE requires one or more qualified ATA/IDE, SCSI, or FireWire drives with the following:
– Formatted with the FAT16, FAT32, or NTFS file system (FAT32 or NTFS recommended)
– Data transfer rates of 3 MB per second or faster
– Drive spin speed of 5,400 RPM (7,200 RPM or faster recommended)
– Average seek time of 10.0 milliseconds or less

Windows Millennium Edition and Windows 98, Second Edition are not supported by the Digi 002.

For the latest information on compatible hard drives, visit the Digidesign Web site.
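As a back-of-the-envelope check on that 3 MB per second figure (our own illustrative calculation, not from the Digidesign article): sustained disk throughput for a session is simply tracks x sample rate x bytes per sample.

```python
# Illustrative calculation (not from the Digidesign article): sustained
# disk throughput needed to stream a multitrack session from disk.

def session_throughput_mb_s(tracks: int, sample_rate_hz: int, bit_depth: int) -> float:
    """Tracks x sample rate x bytes per sample, in MB (10^6 bytes) per second."""
    bytes_per_sample = bit_depth // 8
    return tracks * sample_rate_hz * bytes_per_sample / 1_000_000

# 24 tracks of 16-bit / 44.1kHz audio:
print(session_throughput_mb_s(24, 44100, 16))   # ~2.1 MB/s
# 24 tracks of 24-bit / 48kHz audio:
print(session_throughput_mb_s(24, 48000, 24))   # ~3.5 MB/s
```

A full 24-track session at 24-bit/48kHz already exceeds 3 MB/s, which is why a 5,400 RPM drive meeting that spec is the bare minimum and 7,200 RPM is recommended.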
Windows XP (Oct 2001) / turn off ACPI in windows xp
«  by chrisNova777 on September 21, 2017, 09:43:41 PM »

How do I turn off ACPI mode in Windows XP?
Article #14593 Updated on Apr 27, 2007 at 12:00 AM
If all of your devices are on IRQ 9 in Windows XP, it's because you are running in ACPI mode. Here's how you can switch back to normal Win98-style individually assigned IRQs.

* WARNING: YOU MUST HAVE ALL HARDWARE DRIVERS AVAILABLE AT RESTART. This will re-detect ALL your hardware, and any hardware drivers needed will be asked for. THERE IS A CHANCE THIS WILL RENDER YOUR SYSTEM UNBOOTABLE, do this at your own risk.

To disable ACPI, please change the system setting from ‘ACPI-PC’ to ‘Standard-PC’. Right-Click on My Computer -> Properties -> Hardware -> Device Manager -> Computer -> Advanced Configuration and Power Interface (ACPI) PC -> Driver -> Update Driver -> Install from a list or specific location -> Don’t Search.. -> Standard-PC.

Note that changing this means that all drivers of your hardware are re-installed (keep the driver disks available). Additionally, make sure that PNP OS INSTALLED in BIOS is set to NO (very important).

Windows XP sometimes doesn't like having the HAL changed after Windows is installed; while it is possible to do so, a format followed by the instructions in the links below is the proper way to change it. During installation, press F5 or F7 just after WinXP setup asks you to press F6 to load SCSI drivers from a disk. Setup gives no prompt for this, so keep pressing the key until the right moment. F7 installs Standard PC with no prompt; F5 lets you choose the HAL yourself or provide a third-party HAL.