
Welcome to Oldschooldaw.com! (Online since 2014), serving vintage computers worldwide! This is our NEW WEBHOST: we've moved from our old GoDaddy server to a brand-new location to start off 2026. Thanks to GoDaddy for hosting the site for 11 years, and a big thank you to IONOS.COM for the great pricing going forward!

Recent Posts

41
can TLS 1.3 be supported on Windows 7?

Gemini said
The short answer is: Native support? No. Third-party support? Yes.

In Windows 7, the "operating system" and the "browser" handle security in two completely different ways. Here is the breakdown of how TLS 1.3 works (or doesn't work) on your machine.

1. Native Windows (Schannel): The Hard "No"
Windows 7 uses a built-in security component called Schannel to handle encrypted connections for the OS itself and for programs like Internet Explorer, early versions of Outlook, and Windows Update.

Native Limit: Schannel on Windows 7 was only ever updated to support TLS 1.2 (via an optional update plus manual registry changes); TLS 1.3 support never shipped.

The Result: Any modern app that relies specifically on the Windows 7 system-level security stack will fail to connect to a TLS 1.3-only server. Microsoft has no plans to ever bring TLS 1.3 to the Win7 kernel.
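For the record, the TLS 1.2 enablement mentioned above comes down to a pair of Schannel registry values per protocol. This is a sketch of the standard layout Microsoft documents for Schannel protocol control, not something taken from this thread; back up your registry before applying anything like it:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
```

Note that even with these keys set, Schannel on Windows 7 tops out at TLS 1.2; there is no equivalent "TLS 1.3" key that works.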

2. Browsers: The "Workaround"
Most modern browsers (Chrome, Firefox, and their derivatives) do not use the Windows Schannel component. Instead, they bundle their own security libraries (like NSS for Firefox or BoringSSL for Chrome).

If you use specialized browsers designed for legacy systems, you can use TLS 1.3 perfectly:

r3dfox: This is a modern fork of Firefox specifically for Windows 7/Vista. Because it uses its own internal encryption library, it can handle TLS 1.3 connections even if the underlying Windows 7 OS doesn't know what TLS 1.3 is.

Supermium: A Chromium-based browser for legacy Windows. It brings current Chrome security (including TLS 1.3) to Windows 7.

3. Comparison of Support
The TLS handshake is the "negotiation" where the computer and the server agree on a security protocol. If the versions don't match, the connection is dropped.

Platform | Highest Supported TLS | Note
Windows 7 (Native/Schannel) | TLS 1.2 | Requires KB3140245 and registry tweaks.
Internet Explorer 11 | TLS 1.2 | Locked to the OS's native capabilities.
r3dfox / Supermium | TLS 1.3 | Handle encryption independently of the OS.
Windows 11 (Native) | TLS 1.3 | Supported and enabled by default.
How to check if your Windows 7 browser has it:
You can verify this right now by visiting a site like Cloudflare’s Browsing Experience Test. It will tell you exactly which TLS version your current session is using.
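If you'd rather script the check, Python's ssl module makes the same point as the browsers above: it links against its own bundled OpenSSL rather than Schannel, so the library (not the OS) decides which TLS versions are available. A minimal offline sketch:

```python
import ssl

# Python uses OpenSSL, not Schannel, so even on an old OS the *library*
# decides which TLS versions are on the table.
print("OpenSSL build supports TLS 1.3:", ssl.HAS_TLSv1_3)

# Emulate a "TLS 1.3-only" endpoint: refuse anything older. A client stuck
# at TLS 1.2 (e.g. native Schannel on Windows 7) can never complete this
# handshake, which is exactly why those connections get dropped.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
print("Minimum accepted version:", ctx.minimum_version.name)
```

The same `ctx` can then be used with `ctx.wrap_socket(...)` against a real server, and `tls.version()` on the wrapped socket reports the version that was actually negotiated.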

Bottom line for your setup: If you are using Windows 7 for your audio work or forum management and need to access the modern web, you must use a browser with its own security stack (like r3dfox). Relying on native Windows 7 protocols will result in "Connection Reset" errors on many modern sites.

Can the same thing be done on Windows Vista? The short answer is: yes. The same third-party "independent security stack" strategy that works for Windows 7 also works for Windows Vista. However, because Vista is even older and has a different kernel version (NT 6.0 vs. 7's NT 6.1), the software choices are more specific.

As of 2026, here is how you get TLS 1.3 running on a Vista machine:

1. The Supermium Project (The "Golden Key")
The most reliable way to get TLS 1.3 on Vista today is Supermium.

How it works: Supermium is a fork of Chromium that has been "backported" to work on legacy Windows. Crucially, it brings its own internal encryption libraries (BoringSSL).

Vista Support: Unlike standard Chrome, which dropped Vista years ago, Supermium explicitly supports Vista (even without an "Extended Kernel" in the latest R8/R9 versions).

The Result: You can open Supermium on Vista, navigate to a TLS 1.3-only site (like a modern bank or Discord), and it will load perfectly because the browser is doing the security "handshake" that the Vista OS doesn't know how to do.

2. The "Extended Kernel" (The Advanced Route)
For enthusiasts (especially those on your OldSchoolDAW forum), there is the Windows Vista Extended Kernel by win32ss.

What it does: It modifies Vista's system files to "spoof" the version of Windows 7 or even Windows 10.

The Benefit: This allows you to install "regular" versions of browsers like Firefox 115 ESR or older versions of Chrome that already have TLS 1.3 built-in.

The Downside: It can be unstable and might interfere with legacy audio drivers (like your Digi 002 drivers), which often prefer a "pure" Vista environment.

3. Why it is harder for Vista than Windows 7
While the concept is the same, Vista faces a few extra hurdles:

Instruction Sets (SSE2): Most 3rd-party browsers with TLS 1.3 (like Supermium) require your CPU to support SSE2. If you're running Vista on a very old Pentium 3 or early Athlon XP, the browser won't even launch. (Fortunately, your Intel E8600 "Ultimate Tiger" build has SSE4.1, so you’re more than covered).

Certificates: Even if the browser supports the protocol (TLS 1.3), the OS might not have the modern Root Certificates to trust the website. You often have to manually import a "Root Certificate Update" so the browser knows that "Google" or "Microsoft" is actually who they say they are.

Quote
My Recommendation: If you need to check the web on your Vista machine, don't touch the kernel. Just download the 32-bit (ia32) installer of Supermium. It's the cleanest way to get modern security protocols without breaking the legacy environment you've worked so hard to build.
https://supermium.neocities.org/ Supermium can be used to get proper TLS 1.3 handshaking to browse the internet on Vista.
42
if vista is too old to communicate with the modern internet then what is the oldest windows os that still can in your opinion?

Gemini said
Determining the "oldest" usable Windows OS is a bit like asking what the oldest car is that can still handle a highway—it depends on whether you're willing to modify the engine.

In 2026, there are two ways to answer this: the "Vanilla" answer (what works out of the box) and the "Enthusiast" answer (what works with community help).

1. The Vanilla Winner: Windows 10 (Version 22H2)
Strictly speaking, Windows 10 is now the baseline for a reliable daily driver. Since Microsoft ended standard support for Windows 10 in October 2025, it has officially entered its "legacy" phase.

Why it works: It natively supports TLS 1.3 (the current web security standard) and SHA-2 code signing.

The Internet Catch: Most major browsers (Chrome, Firefox, Edge) still target Windows 10 as their minimum requirement for "current" versions. You can browse, bank, and stream without any special workarounds.

2. The Enthusiast "Floor": Windows XP (with Supermium)
If you are willing to use community-developed tools, Windows XP (Service Pack 3) is surprisingly the oldest OS that can still "render" the modern web.

The Secret Sauce: A project called Supermium (a fork of Chromium) was updated as recently as March 2026 to bring modern browser capabilities back to XP and Vista. It bypasses the OS's lack of security protocols (TLS 1.2/1.3) by handling the encryption within the browser itself.

The Reality Check: While you can load YouTube or Reddit on XP in 2026 using Supermium, it is highly unsafe. The OS kernel itself is riddled with unpatched vulnerabilities. It's a "look but don't touch" experience—fine for a hobbyist project, but dangerous for logging into your email.
43
what are the main reasons windows vista is not usable as an os today? given that there are 32bit/64bit versions of vista, what technologies have emerged since the initial release of vista till today that make windows vista too old to use as a daily driver

Gemini said
Windows Vista is a fascinating piece of software history, but as of 2026, it is essentially "digital amber": a preserved moment in time that cannot interact with the modern world. While the 32-bit and 64-bit architectures are technically still relevant, the ecosystem surrounding them has completely evolved.

The "un-usability" of Vista today stems from a total breakdown in three pillars: Security, Connectivity, and Hardware Standards.

1. The Technological "Chasm" (Post-2007 Tech)
Several foundational technologies emerged after Vista that the OS simply does not understand without extreme community-made "Extended Kernels."

NVMe & SSD Optimization: Vista was designed for spinning hard drives. It lacks native support for NVMe (the fast storage in modern PCs) and, more importantly, it doesn't support the TRIM command. Without TRIM, an SSD's performance will degrade rapidly, eventually leading to drive failure or extreme slowdowns.
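As an aside, on Windows 7 and later you can ask the OS whether TRIM notifications are being sent from an elevated command prompt (a result of 0 means TRIM is enabled); Vista simply has no equivalent, which is the gap described above:

```
fsutil behavior query DisableDeleteNotify
```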

UEFI vs. BIOS: Modern computers use UEFI Class 3, which often removes the "Compatibility Support Module" (CSM). Vista requires a legacy BIOS environment to boot. Installing it on a post-2020 machine often results in an immediate "ACPI_BIOS_ERROR" blue screen.

USB 3.0/3.1/4.0: Vista has no native drivers for USB 3.0. On a modern motherboard, your keyboard and mouse (which likely run through a USB 3.0 controller) simply won't work during or after installation.

SHA-2 Code Signing: In 2019, the industry moved from SHA-1 to SHA-2 hashing for digital signatures. Since Microsoft ended Vista's support in 2017, the OS doesn't "trust" modern drivers or updates because it can't verify their SHA-2 signatures.

2. The Browser & Web Dead-End
This is the biggest hurdle for a daily driver. The modern web relies on TLS 1.3 and HTTP/3 for security and speed.

The SSL/TLS Gap: Vista’s built-in networking stack only supports older, vulnerable versions of SSL/TLS. Most modern websites (including Google, YouTube, and banking sites) will simply refuse to connect, throwing "Secure Connection Failed" errors.

Dropped Support: Chrome, Firefox, and Edge dropped Vista years ago. Without a modern browser, you cannot run web apps, stream high-definition video (due to lack of modern DRM like Widevine), or even render basic CSS layouts used by 99% of sites today.

3. Instruction Set Evolution
Modern software is often compiled to require specific CPU instructions that didn't exist or weren't standard in the Vista era:

AVX/AVX2: Many modern applications (like the latest versions of Photoshop or Premiere) require AVX instructions found in newer CPUs. If the OS doesn't know how to manage these instructions, the programs will crash on launch.

The 64-bit Shift: While Vista had a 64-bit version, modern 64-bit software often targets the Windows 10/11 API. Vista lacks thousands of "Entry Points" (DLL functions) that modern apps call upon to function.

4. Security in 2026
Using Vista online today is genuinely dangerous.

Quote
Unpatched Kernel: Over 9 years of critical "Remote Code Execution" vulnerabilities have been discovered since Vista’s final update in 2017. A Vista machine connected directly to the internet can be compromised in minutes by automated bots without the user even clicking a link.

No Modern Antivirus: No reputable security suite still supports Vista, leaving you with no defense against modern ransomware or malware.

The Verdict: Vista is a masterpiece for retro-computing and legacy audio production (like your work with the Digi 002), where you can keep the machine offline and use period-correct hardware. As a "daily driver" for the 2026 internet, however, it is effectively a locked door with no key.
44
it's possible the GMA950 was the original inspiration for AERO glass in vista! read on!!!

Intel 945 Express Chipset Family (GMA 950): This was the "gold standard" for early Vista testing. It was the first integrated graphics core to fully support the WDDM driver model required for the Aero Glass interface.

The Intel GMA 950 officially debuted on June 1, 2005, as part of the 945 Express Chipset family. For context, Windows Vista didn't reach "Release to Manufacturing" (RTM) until November 2006, and it didn't hit general availability until January 2007.

That year-and-a-half gap was crucial. It gave Microsoft and Intel enough time to use the 945/GMA 950 as the primary "soak test" hardware for the Desktop Window Manager (DWM) and the WDDM driver stack.

Why the Timing Mattered
The "Longhorn" Pivot: By the time the GMA 950 launched in mid-2005, Microsoft had recently "reset" the development of Vista (codenamed Longhorn). They needed a stable, high-volume integrated graphics solution to prove that the Aero Glass effect could run on standard office PCs, not just high-end gaming rigs.

DirectX 9.0c Support: The GMA 950 was one of the first mass-market integrated chips to support the specific hardware requirements for Aero (specifically Pixel Shader 2.0).

The "Vista Capable" vs "Vista Premium Ready" Drama: This chipset was actually at the center of a famous lawsuit. Many 915-based PCs were sold as "Vista Capable," but they couldn't run the Aero interface. The 945/GMA 950 was the "real" entry point for the full Vista experience.


Gemini said
You're spot on—the GMA 950 was definitely in the wild well before Vista hit the shelves.


Quick Timeline
May/June 2005: Intel 945G/P (GMA 950) launches.
July 2005: Windows Vista Beta 1 is released to developers (frequently tested on 945 systems).
January 2006: The mobile version (945GM) launches with the first "Centrino Duo" (Napa) laptops.
January 2007: Windows Vista officially launches to the public.

do you think that vista could have been more successful if it had been released a bit later, like 6 months later when people had access to hardware that had the gpu power to support vista

Gemini said
It’s a classic "what-if" in tech history. While a six-month delay (pushing the launch to June/July 2007) would have placed Vista in a slightly better hardware environment, most historians and enthusiasts argue it wouldn't have saved the OS from its "failure" narrative.

The issues were deeper than just raw GPU power. Here is why a delay might have helped—and why it likely wouldn't have been enough:

1. The GPU "Sweet Spot" would have been better
By mid-2007, the hardware landscape was shifting in ways that favored Vista's heavy UI:

The Santa Rosa Platform: Intel released the "Santa Rosa" laptop platform (the 965 chipset/GMA X3100) in May 2007. This was the first integrated graphics chip that truly felt "at home" with Vista's Aero Glass.

DirectX 10 Maturity: NVIDIA’s mid-range 8-series cards (8600 GT, etc.) were more common by mid-2007. These were built specifically for Vista's driver model, whereas the cards available at launch (like the 7-series) were essentially legacy cards being "forced" to work with new drivers.

2. The "Vista Capable" PR Nightmare
A delay might have allowed Microsoft to kill off the "Vista Capable" marketing program. This was the program that allowed Intel to sell underpowered 915-chipset machines as "Ready for Vista," even though they couldn't run Aero. If Microsoft had waited 6 months, they could have mandated that all new PCs meet the "Premium Ready" spec, avoiding the massive consumer backlash when people realized their new "Vista" laptops looked exactly like Windows XP.

3. Drivers: The Real Culprit
The primary reason people hated Vista wasn't just slow GPUs—it was stability.

The 32-bit driver crash: At launch, nearly 30% of all Vista crashes were attributed to NVIDIA’s early drivers.

Creative Labs & Printers: Audio interfaces (like your Digi 002 or UAD cards) and printers were notorious for not having working drivers at launch.
A 6-month delay would have given third-party developers more time to finalize drivers, potentially preventing the "Vista is unstable" reputation that stuck for years.

Why it still might have failed
Even with better hardware, Vista faced two "unfixable" problems that a 6-month delay wouldn't have solved:

User Account Control (UAC): The constant security pop-ups were a fundamental design choice, not a hardware limitation. People hated the "nagging," and it took until Windows 7 for Microsoft to tune the frequency of those alerts.

The "Apple vs. PC" Marketing: In late 2006, Apple launched the "Get a Mac" (I'm a Mac/I'm a PC) ad campaign. By mid-2007, that campaign was at its peak. Vista had already become a punchline in pop culture; no amount of extra GPU power could easily change that public perception.

Summary
If Vista had launched in July 2007 (around the same time as the 965 chipset and Intel Core 2 Duo refreshes), it would have been technically superior and much more stable. However, by that point, the tech world was already looking toward the next big thing. In reality, the "fixed" version of Vista was exactly what we got two years later: Windows 7.



45
1. Brand: Albatron | Model: PX865PE775-G-V2.0 | Chipset: Intel 865PE | Max CPU: Core 2 Duo E6000 | RAM: DDR1;
2. Brand: Soltek | Model: SL-865-775 | Chipset: Intel 865PE | Max CPU: Pentium D / Early C2D | RAM: DDR1;
3. Brand: ASUS | Model: P5VDC-MX | Chipset: VIA P4M800 Pro | Max CPU: Core 2 Duo E6000 | RAM: DDR/DDR2;
4. Brand: ASRock | Model: 775i65G (R2.0) | Chipset: Intel 865G | Max CPU: Core 2 Duo E6700 | RAM: DDR1;
5. Brand: ASUS | Model: P5VDC-TVM | Chipset: VIA P4M800 Pro | Max CPU: Core 2 Duo E4000 | RAM: DDR2;
6. Brand: ASUS | Model: P5PE-VM | Chipset: Intel 865G | Max CPU: Core 2 Duo E6700 | RAM: DDR1;
7. Brand: ASRock | Model: 4CoreDual-VSTA | Chipset: VIA PT880 Pro | Max CPU: Core 2 Extreme QX6700 | RAM: DDR/DDR2;
8. Brand: ASRock | Model: ConRoe865PE | Chipset: Intel 865PE | Max CPU: Core 2 Duo E6700 | RAM: DDR1;
9. Brand: MSI | Model: P4M890M-L / V2 | Chipset: VIA P4M890 | Max CPU: Core 2 Duo E6000 | RAM: DDR2;
10. Brand: ECS | Model: P4M890T-M (V2.0) | Chipset: VIA P4M890 | Max CPU: Core 2 Duo E6000 | RAM: DDR2;
11. Brand: Foxconn | Model: P4M8907MA-RS2H | Chipset: VIA P4M890 | Max CPU: Core 2 Duo E6000 | RAM: DDR2;
12. Brand: ASRock | Model: 4CoreDual-SATA2 | Chipset: VIA PT880 Ultra | Max CPU: Core 2 Quad Q6700 | RAM: DDR/DDR2;
13. Brand: Biostar | Model: P4M890-M7 TE | Chipset: VIA P4M890 | Max CPU: Core 2 Duo E6700 | RAM: DDR2;
14. Brand: Gigabyte | Model: GA-8I865GME-775-RH (Rev 6.6) | Chipset: Intel 865G | Max CPU: Core 2 Quad Q6600 | RAM: DDR1;
15. Brand: ASRock | Model: 775i65G (R3.0) | Chipset: Intel 865G | Max CPU: Core 2 Duo E8600 | RAM: DDR1;
46
General / Miscellaneous / what is PCIE multiplexing?
« Last post by chrisNova777 on April 07, 2026, 09:07:25 PM »
Gemini
PCIe Lane Multiplexing Explained
You said
elaborate on  PCI Express Lane Multiplexing
Gemini said
PCI Express (PCIe) Lane Multiplexing is a sophisticated method used to manage and distribute data paths within a computer's architecture. It allows multiple devices to share a limited number of PCIe lanes or enables a single set of lanes to be split into various configurations.

At its core, multiplexing is about resource efficiency. Because the CPU and chipset have a finite number of physical PCIe lanes, multiplexing ensures that high-bandwidth components can coexist without needing a dedicated direct line for every single peripheral.

Core Mechanisms of PCIe Multiplexing
Multiplexing generally occurs through two primary methods: PCIe Switches and Bifurcation.

1. PCIe Switching (Active Multiplexing)
A PCIe switch acts like a high-speed network router for data packets. Its single "upstream" port connects toward the CPU, and the switch fans that link out into a larger number of "downstream" ports for devices.

Fan-out: A switch can take 16 lanes and turn them into 32 or more. While this doesn't increase the total available bandwidth (the "pipe" to the CPU remains 16 lanes wide), it allows more devices to be connected simultaneously.

Packet Switching: The switch manages data traffic by directing packets to the correct device. If Device A is idle, Device B can utilize the full bandwidth of the common link.

Latency: Because the switch must process and route packets, it introduces a very small amount of electrical latency compared to a direct connection.

2. PCIe Bifurcation (Passive Multiplexing)
Bifurcation is a simpler, hardware-level split of a single PCIe slot. It allows a large "link" (like an x16 slot) to be divided into smaller independent links (like four x4 links).

Logic-Based: Unlike a switch, bifurcation doesn't use a dedicated chip to route packets. Instead, the CPU or chipset is configured to treat the physical pins of one slot as multiple logical controllers.

Zero Latency: Since there is no intermediary chip processing data, there is no added latency.

Hardware Requirement: This requires support from both the motherboard BIOS/firmware and the physical trace layout. A common use case is a "Quad M.2" adapter card that plugs into one x16 slot but runs four separate NVMe drives.
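The trade-off between the two methods can be made concrete with some back-of-the-envelope arithmetic. This sketch assumes PCIe 3.0 figures (8 GT/s per lane with 128b/130b encoding, about 0.985 GB/s usable per lane per direction); the numbers are illustrative, not taken from the text above:

```python
# Assumed PCIe 3.0 figures: 8 GT/s per lane, 128b/130b encoding.
PER_LANE_GBPS = 8 * (128 / 130) / 8  # GT/s -> GB/s after encoding overhead

def link_bandwidth(lanes: int) -> float:
    """Usable one-direction bandwidth of an xN link, in GB/s."""
    return lanes * PER_LANE_GBPS

# Switch fan-out: four devices each "see" x16, but share one x16 uplink.
uplink = link_bandwidth(16)
per_device_when_busy = uplink / 4     # all four devices active at once

# Bifurcation: the x16 slot is carved into 4 fixed x4 links, no sharing.
bifurcated = link_bandwidth(4)

print(f"x16 uplink:                {uplink:.2f} GB/s")
print(f"switch, 4 devices active:  {per_device_when_busy:.2f} GB/s each")
print(f"bifurcated 4 x x4:         {bifurcated:.2f} GB/s each")
```

Under sustained load from all devices the two schemes deliver the same per-device number; the switch's advantage is that an idle device's share can be reclaimed by a busy neighbor, while bifurcation's advantage is zero added latency.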

The Role of the Multiplexer (MUX) Chip
In specific hardware designs—most notably in laptops or specialized workstations—MUX chips are used to physically toggle the electrical path of a PCIe signal between two different controllers.

A classic example is a laptop with both integrated graphics (iGPU) and a discrete graphics card (dGPU). A MUX chip can physically switch the internal display's connection:

Efficiency Mode: The display is connected to the iGPU to save power.

Performance Mode: The MUX chip reroutes the PCIe signal so the dGPU talks directly to the display, bypassing the iGPU's "frame buffer" to reduce latency and increase frame rates.

Why Multiplexing is Essential
Bandwidth Management
Modern systems often have more hungry devices (NVMe drives, 10GbE network cards, GPUs) than the CPU can natively support. Multiplexing allows the system to prioritize active devices.

Expandability
Without multiplexing, a standard consumer CPU with 20-24 lanes would be limited to one GPU and perhaps two M.2 drives. Multiplexing via the motherboard chipset (which acts as a giant PCIe switch) allows for additional USB controllers, SATA ports, and audio hardware to share a smaller "DMI" link to the CPU.

Performance Balancing
In multi-GPU setups, multiplexing ensures that even if the CPU only has 16 lanes, two GPUs can both "see" an x16 connection. While they share the 16-lane bandwidth to the CPU, the peer-to-peer communication between the cards can happen at the higher lane count provided by the switch.





47
OsX86 Hackintosh (Intel / AMD) / why did apple choose the 975X chipset?
« Last post by chrisNova777 on April 07, 2026, 09:04:36 PM »
why did apple pick the 975X chipset + ICH7R for the original mac pro platform?
Gemini said
The choice of the Intel 975X Express Northbridge and the ICH7R Southbridge for the original Mac Pro (the "MacPro1,1") in 2006 was a strategic compromise between high-end workstation performance and consumer-grade compatibility during the critical PowerPC-to-Intel transition.

While contemporary PC workstations were moving toward the Intel 5000X (Greencreek) chipset, Apple took a unique path that blended parts of the enthusiast desktop world with server-grade hardware.

1. The Workstation Gap: 975X vs. 5000X
At the time, the "correct" choice for a dual-Xeon system was the Intel 5000X chipset. However, the 5000X was designed primarily for servers and featured high power consumption and heat output.

By selecting the 975X, Apple gained several advantages:

Lower Latency & Heat: The 975X was a high-end desktop/enthusiast chipset. It was more efficient and ran cooler than the 5000-series server chipsets, allowing Apple to maintain the quiet acoustic profile they promised for the Mac Pro.

PCI Express Versatility: The 975X was one of the first chipsets to support dual x8 graphics or a single x16 slot natively, which aligned with Apple’s goal of making the Mac Pro a flexible machine for both video editors (who needed high-end GPUs) and PCIe expansion users.

2. The ICH7R Southbridge: Legacy & Stability
The ICH7R (I/O Controller Hub 7) was the "Southbridge" that handled the computer’s slower communication tasks (SATA, USB, PCI, Audio).

SATA II Maturity: The ICH7R provided four native SATA 3Gb/s ports. This allowed Apple to implement the "cable-free" drive bay system found in the Mac Pro 1,1 without needing complex third-party controllers that might have required unstable drivers during the transition to macOS Tiger (Intel).

Tiger & Leopard Compatibility: Because the ICH7R was a widely used and stable component in the PC world, it was much easier for Apple's engineers to write stable kexts (drivers) for it during the rush to get Mac OS X 10.4.7 ready for the Intel launch.

AHCI Support: The "R" variant (RAID) supported Advanced Host Controller Interface (AHCI), which was crucial for the performance of the SATA drives and the "hot-swapping" feel of the Mac Pro's internal drive sleds.

3. The "Frankenstein" Architecture
The Mac Pro 1,1 was technically an architectural hybrid. It used:

Server-grade CPUs: Two Intel Xeon "Woodcrest" 5100-series processors.

Server-grade Memory: Fully-Buffered DIMMs (FB-DIMMs) with ECC support.

Enthusiast-grade Chipset: The 975X/ICH7R combo.

This combination allowed Apple to offer the ECC memory protection and multi-socket processing required by pros, while using a chipset that was easier to tune for the specific "instant-on" and sleep-wake behaviors expected of a Mac.

Why not use the ICH8R?
By the time the Mac Pro launched in August 2006, the ICH8R was already appearing in PC motherboards. However, Apple chose the ICH7R likely due to validation cycles. Apple had been testing "Marklar" (the secret Intel version of Mac OS X) on ICH7-based developer transition kits (the famous Pentium 4 DTKs) for a year. Sticking with the ICH7R ensured that the first-generation flagship was as stable as possible on day one.

Note for your projects: If you are still working on your "Ultimate Tiger" build, this is exactly why the 975X/ICH7R era is so nostalgic—it represents the peak "Golden Era" of compatibility where high-end PC hardware and early Intel macOS were perfectly aligned.

You said
why didn't they choose the P965?
Gemini said
Choosing the P965 Express chipset (released just before the Mac Pro in 2006) might seem logical because it was "newer," but for a workstation like the Mac Pro, it was a non-starter for several technical and professional reasons.

While the P965 was a massive hit for the consumer "Core 2 Duo" desktop market, it lacked three critical "Pro" pillars that the 975X provided.

1. The ECC Memory Requirement
This was the primary dealbreaker. The Mac Pro was designed to use ECC (Error Correction Code) memory to prevent system crashes during long video renders or scientific calculations.

975X: Supported ECC memory natively. This allowed Apple to bridge the gap between a desktop-class chipset and the server-grade Xeon processors.

P965: Was a purely "mainstream" consumer chipset. It did not support ECC memory. Had Apple used the P965, the Mac Pro would have been prone to the "bit-flip" errors that professional workstations are designed to avoid.

2. PCI Express Lane Multiplexing
The original Mac Pro was marketed for its expansion capabilities, particularly for high-end graphics and RAID cards.

975X: Could split its 16 PCIe lanes into two x8 slots (or a single x16). This was "native" to the Northbridge, ensuring high-bandwidth performance for dual-GPU setups or high-speed fiber channel cards.

P965: Could only do an x16 / x4 configuration. The second "long" slot on P965 boards usually ran through the Southbridge at a significantly throttled x4 speed, which would have been an embarrassing bottleneck for a "Pro" machine.

3. Native IDE (PATA) Support
While it sounds like a small detail now, the transition era of 2006 still relied heavily on IDE for optical drives (DVD burners).

975X + ICH7R: The ICH7R Southbridge included a native IDE controller. Apple used this to power the SuperDrives in the top optical bays without needing a third-party chip.

P965 + ICH8R: Intel famously dropped native IDE support with the ICH8 series. PC motherboard manufacturers had to add "janky" third-party chips (like JMicron controllers) to provide IDE ports, which often caused driver instability and "Kernel Panics"—something Apple would never have tolerated in a flagship product.

4. Validation and the "Transition" Timeline
Apple's "Developer Transition Kits" (the modified PowerMac G5s given to developers to prepare for Intel) were essentially built on the 945G/ICH7 architecture.
The 975X was an evolution of that 945/955 lineage. By sticking with the 975X, Apple’s software team didn't have to rewrite the core "plumbing" of macOS for a brand-new, unproven P965 architecture. They chose the stable, mature, high-bandwidth option over the "new and shiny" consumer one.
48
 It’s the computer that Apple should have built + supported for a decade, but didn't.

EXACTLY
49
hey it's nice when things WORK

Quote
why am i obsessed with creating the ultimate workstation compatible with TIGER OSX? supporting the best graphics + disk access transfer speeds possible makes my music software work faster, and that supports my creative process for making music with the authentic tools that were available in 2006. also, there was so much confusion when they discontinued the powermac g4 series, plus the whole fact that powermac g5's couldn't run os9, and then the abrupt switch to intel processors left a lot of people with a bad taste in their mouths re: compatibility, and also the question of what to actually do with the g4s + g5 computers we had spent a lot of money on that were suddenly deemed obsolete overnight, midway through their lifecycle!

It sounds like you are tapping into a very specific and justified form of "technological closure." The mid-2000s were a chaotic era for Apple users, and building the "Ultimate Tiger" machine is essentially a way to fix the broken promises of that transition period.

Here is a breakdown of why that particular obsession makes so much sense from both a technical and creative standpoint.

1. Resolving the "Transition Trauma"
The period between 2003 and 2006 was arguably the most volatile in Mac history. You had three massive shifts happening simultaneously:

The OS 9 Break: When the PowerMac G5 launched, it dropped the ability to boot into OS 9 natively. For professionals with expensive MIDI setups and plugins, this was a massive "planned obsolescence" hit.

The Architecture Shift: Moving from PowerPC (PPC) to Intel meant that even the high-end G5s people had just spent $3,000 on were effectively dead-ends within three years.

The Software Gap: Tiger (10.4) was the "bridge" OS—the only one that lived comfortably in both the PPC and Intel worlds.

By building a high-performance Tiger workstation now, you are essentially creating the "missing link"—a stable, finalized version of a system that was cut short by corporate pivots.

2. Creative Flow vs. Modern Friction
There is a legitimate argument for using 2006-era tools for music production.

Zero Distractions: A Tiger-era machine isn't constantly pinging you with iCloud notifications, OS update prompts, or subscription-model software nagging.

DSP and Latency: Professional audio cards from that era (like the Digi 002 or Delta 410) were designed to talk directly to the hardware. On a modern system, layers of "security" and driver abstraction can actually introduce micro-latencies that weren't as prevalent in a well-tuned legacy setup.

The "Instrument" Factor: When your DAW (like Reason 3) and your OS are locked in a specific version, the computer stops being a "general-purpose device" and becomes a dedicated instrument. You learn its quirks and its ceiling, which often forces more creative decision-making.

3. The "Tiger" Performance Sweet Spot
Tiger (10.4.11) is widely considered one of the leanest, fastest versions of macOS ever released. It lacked the heavy "eye candy" and background metadata indexing (Spotlight was brand new and much lighter) that started to bloat the OS in Leopard and beyond.

Using modern hardware (an Intel Core 2 Duo E8600, SATA SSDs on PCIe controllers at SATA2 speeds) to run an OS that originally struggled on G4 chips with IDE drives creates a "God Mode" experience. You're getting:

Instantaneous UI: Windows and menus snap open because the hardware is overkill for the software's requirements.

Maximized Throughput: Using PCIe-based SATA controllers (like the Silicon Image SiI3132) lets you hit the ceiling of what the software can actually process, so disk-heavy sample libraries load faster than they ever did from a 2006-era mechanical drive.
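If you want to confirm the throughput claim on your own build, a crude sequential write/read pass with dd is enough to tell SATA2 apart from a legacy SATA1 link or an old mechanical drive. This is a hedged sketch, not a proper benchmark: the file path and sizes are illustrative, and the block size is given in bytes so that both Tiger's BSD dd and GNU dd accept it.

```shell
#!/bin/sh
# Rough sequential-throughput sanity check (a sketch, not a benchmark).
# /tmp exists on Tiger and any other Unix; adjust TESTFILE to point at
# the drive hanging off the PCIe SATA card you want to measure.
TESTFILE=/tmp/seqtest.bin

# Write 64 MiB of zeros, then read the file back. dd reports elapsed
# time and bytes/sec in its summary line on stderr; tail -1 grabs it.
dd if=/dev/zero of="$TESTFILE" bs=1048576 count=64 2>&1 | tail -1
dd if="$TESTFILE" of=/dev/null bs=1048576 2>&1 | tail -1

# Clean up the test file so it doesn't linger on the SSD
rm -f "$TESTFILE"
```

A SATA2 link tops out around 300 MB/s on paper, while a 2006-era mechanical drive managed maybe 60-80 MB/s sustained, so the difference shows up clearly even in a quick-and-dirty test like this.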

Building this isn't just about nostalgia; it’s about optimization. You are taking the best software environment from a specific era and finally giving it the "lungs" (hardware) it needs to breathe properly. It’s the computer that Apple should have supported for a decade, but didn't.

amen to that
Gemini said
It really is the ultimate "tech justice" project. There is something incredibly satisfying about taking an operating system that was once held back by hardware transitions and finally letting it run at full throttle.

Since you're bridging that 2006 era with much faster modern components, are you finding that the older plugins and DAW versions handle the extra "headroom" gracefully, or do you have to throttle things back to keep the timing sample-accurate?
50
this is a list of motherboards with SATA controllers supported by OSX TIGER, unleashing the full AHCI + NCQ + hot-plug features of SATA II. motherboards with controllers older than these will only run at SATA1 speed + tech, and some newer ones won't work either; it only works with specific types of ICH controllers.

01. Abit AW9D-MAX (975X + ICH7R) ATX;
02. Abit AL8 (945P + ICH7R) ATX;
03. ASUS P5W DH Deluxe (975X + ICH7-DH) ATX;
04. ASUS P5WD2 Premium (955X + ICH7R) ATX;
05. ASUS P5WD2-E Premium (975X + ICH7R) ATX;
06. ASUS P5LD2 Deluxe (945P + ICH7R) ATX;
07. ASUS P5LD2-VM DH (945G + ICH7-DH) Micro-ATX;
08. ASUS P5W64 WS Professional (975X + ICH7R) ATX;
09. ASUS P5M2-M/C (Intel 3000 + ICH7R) Micro-ATX;
10. DFI LanParty UT iC7975X (975X + ICH7R) ATX;
11. Gigabyte GA-G1975X (975X + ICH7R) ATX;
12. Gigabyte GA-8I955X Royal (955X + ICH7R) ATX;
13. Gigabyte GA-8I955X Pro (955X + ICH7R) ATX;
14. Gigabyte GA-8I945P Pro (945P + ICH7R) ATX;
15. Gigabyte GA-8I945GMF (945G + ICH7R) Micro-ATX;
16. Gigabyte GA-8I945GMF-RH (945G + ICH7R) Micro-ATX;
17. Intel D975XBX "Bad Axe" (975X + ICH7R) ATX;
18. Intel D975XBX2 "Bad Axe 2" (975X + ICH7R) ATX;
19. Intel D955XBK (955X + ICH7R) ATX;
20. Intel D945GTP (945G + ICH7R) Micro-ATX;
21. Intel D945PWM (945G + ICH7-DO) Micro-ATX;
22. Intel D945PSN (945P + ICH7R) ATX;
23. MSI 975X Platinum (975X + ICH7R) ATX;
24. MSI 955X Platinum (955X + ICH7R) ATX;
25. MSI 945P Neo2-DH (945P + ICH7-DH) ATX;
26. MSI 945GZM5 v1.0 (945G + ICH7R) Micro-ATX;
27. Supermicro PDSML-LN2 (Intel 3000 + ICH7R) Micro-ATX;
28. Supermicro PDSME (Intel 3000 + ICH7R) ATX;
29. Supermicro PDSM4+ (E7230 + ICH7R) ATX;

these specific boards are extremely useful for advanced users, since relying on the onboard controller leaves more slots open for further expansion purposes