Comparing Architectures: VAX, Alpha, Itanium and X86-64 (OpenVMS Boot Camp 2017)

Science & Technology

An introduction to the X86-64 Architecture, and a comparison of this architecture with the VAX, Alpha, and Itanium architectures that OpenVMS currently runs on. Delivered at the OpenVMS Boot Camp 2017 in Westford, MA.

Comments: 84

  • @lawrencedoliveiro9104 · 3 years ago

    26:34 Fun fact: the original 32-bit ARM architecture had predicates (conditional execution) on its instructions too. That was done away with in 64-bit ARM, which goes back to conditional branching.
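
    A minimal C sketch of what that predication enabled (an illustration, not from the talk): for the selection below, a 32-bit ARM compiler can emit CMP followed by predicated MOVs with no branch at all, while AArch64, having dropped per-instruction predication, instead uses its dedicated conditional-select instruction (CMP + CSEL):

        /* Branch-free selection: AArch32 can predicate the moves,
         * AArch64 uses CSEL, a branch-only ISA needs a jump. */
        int max_int(int a, int b) {
            return (a >= b) ? a : b;
        }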

  • @arm-power · 2 years ago

    1) Yes, AArch64 abandoned the 4-bit predication present in AArch32. Those 4 bits were used to extend the register file from 16 to 32 registers (for a 3-operand instruction that's 1 more bit per operand, hence the doubled register count, plus one extra bit for extensions like TME, SVE etc.). ARM found that for a modern out-of-order CPU with advanced predictors, predication has no benefit in performance or energy consumption; it only consumed precious opcode space.

    2) I miss a comparison of memory models: weak (ARM, Alpha, IA64) vs. strong (x86). This is more important for OoO and high IPC than the number of architectural registers or predication. Advanced modern memory models allow fencing and support many kinds of LD and ST instructions, helping an OoO CPU determine the purpose of a ST: a spill of a temporary to the L1 cache because registers ran out, or a store that other CPU cores will consume. Obviously this is a HUGE difference for OoO execution, and ARM, Alpha and IA64 provide all the necessary information, while x86 with its strong memory model has to determine it on the fly - extra transistors, extra heat, extra cost to overcome the old ISA's disadvantages.

    3) Another thing is the decoding problem. Apple's M1 can decode 8 instructions per cycle - no problem with the fixed 32-bit width of RISC instructions. How do you load 8 variable-width x86 CISC instructions when you only know where the 1st one begins? The 2nd instruction could start anywhere between the 2nd and the 16th byte... the 8th anywhere between the 8th and the 106th byte, etc. Modern x86 uses PREDICTORS even to decode 4 instructions per cycle. Predictors can fail, cost a huge number of transistors, cost development effort, and consume unnecessary energy. RISC decode is lean and super efficient. CISC x86 can overcome the disadvantages of the old ISA, but it adds transistors which simply should not be there. That is the secret of the inefficiency and low IPC of x86. Intel's Golden Cove core finally has 6 decoders now - something Apple has had for years in a much smaller core, while having significantly higher IPC than Golden Cove. Apple's M2 will probably move to 10-instruction decode per cycle; x86 CPUs cannot. Unless x86 introduces a TAG instruction pointing at the beginning of every 4-instruction block (a kind of Itanium VLIW block, or a RISC/CISC hybrid). But that would add an extra 2 bytes per every 4 instructions, so one of the main CISC x86 advantages - better code density than RISC (an ARM64 binary is about 16% larger than x86-64) - would be gone. And x86 would still need those power-hungry predictors for the 4 instructions between TAGs. So the x86 ISA can be kept alive for the next 20 years in terms of performance. But the price of all these extra workarounds is huge and still growing. Something like a steam engine in a daily-driver car: sure, with today's electronics it can be done, but it makes no sense given the efficiency and the higher cost. Especially when only two companies have a license for that old garbage, which greatly limits competition. The sooner x86 dies, the sooner customers will enjoy lower prices, lower power consumption and higher performance from RISC machines (ARMv9 with 2048-bit SIMD SVE2 and matrix SME, or alternatively RISC-V).
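
    A minimal sketch of the weak-vs-strong point in (2), using C11 atomics and pthreads (an illustration, not anything from the talk): on the weakly ordered ISAs (ARM, Alpha, IA-64) the release/acquire pair below compiles to explicit barriers or load-acquire/store-release instructions, while on x86's strong (TSO) model ordinary MOVs already provide the ordering:

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>

        atomic_int  payload = 0;
        atomic_bool ready   = false;

        void *producer(void *arg) {
            atomic_store_explicit(&payload, 42, memory_order_relaxed);
            /* Release store: a barrier or store-release on ARM/Alpha/IA-64;
             * a plain MOV on x86, whose strong model already orders stores. */
            atomic_store_explicit(&ready, true, memory_order_release);
            return NULL;
        }

        void *consumer(void *arg) {
            /* Acquire load pairs with the release store above, so once
             * ready is observed, payload == 42 is guaranteed visible. */
            while (!atomic_load_explicit(&ready, memory_order_acquire))
                ;  /* spin until the flag is published */
            printf("payload = %d\n",
                   atomic_load_explicit(&payload, memory_order_relaxed));
            return NULL;
        }

        int main(void) {
            pthread_t p, c;
            pthread_create(&c, NULL, consumer, NULL);
            pthread_create(&p, NULL, producer, NULL);
            pthread_join(p, NULL);
            pthread_join(c, NULL);
            return 0;
        }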

  • @williamdavidwallace3904 · 2 years ago

    RISC machines like POWER from IBM are quite different from what Alpha implements. East-coast RISC vs. west-coast RISC.

  • @lawrencedoliveiro9104 · 3 years ago

    6:32 Early VAXes did have a PDP-11 compatibility mode -- that’s what the top bit in the PSL was for -- PSL$M_CM (how on earth did I remember that). I think it was only the models with names beginning “VAX-11” that had that feature.

  • @nickm8134 · 2 years ago

    If my old DECie memory serves me, the VAX-11/725, 730, 750, 751, 780, 782, 785, and the VAX 8600 (which was originally going to be called the VAX-11/790) had PDP-11 hardware/firmware compatibility mode. This allowed RSX utilities and compilers to run on the early VAXes, and provided for cross-development of RSX-11 and MicroRSX software on VAX and for migration from RSX-11 to VMS. The RSX-11 Application Migration Executive - AME - provided an RSX-11-like environment and a subset of the RSX development utilities to enable this. RSX-11 AME relied on hardware compatibility mode. From VMS 4.0 onwards (1984), I think, all utilities and compilers had been migrated to native VMS. RSX-11 AME was superseded by the VAX-11 RSX layered product, which could run in either hardware compatibility mode (on the VAX-11s/8600) or software emulation mode (e.g. MicroVAX and all future VAXes beyond the VAX-11s). The glory days!

  • @CamielVanderhoeven · 5 years ago

    On slide 24, for everyone who wonders what a "PMD" is: that's a typo; it's meant to read "PMC", Performance Monitoring Counter.

  • @alanrollychavezarancibia5514 · 2 years ago

    Where can one download an OpenVMS x86-64 ISO, USB or DVD image for workstations?

  • @tiagodeaviz · 5 years ago

    I've always liked learning about different architectures. VAX is really something I've never seen, but I'm a UNIX nerd. This video helped me learn the differences between each of these architectures, and I'm really glad I found it. Thanks for the excellent presentation.

  • @jq747 · 5 years ago

    I shuddered remembering the hateful segmented memory system of the pre-32-bit days. Near call.. far call.. small model.. large model.. huge model.. sorry, 64K is all you got.. or is it 1 MB.. Arrrrgh!!!
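
    For anyone who never suffered it, a toy C illustration of the real-mode address arithmetic behind those limits: a 16-bit segment and 16-bit offset combine as below, which is exactly where the 64 KB-per-segment and roughly 1 MB total ceilings came from:

        #include <stdint.h>

        /* Real-mode 8086 addressing: segments start every 16 bytes,
         * each reaches 64 KB, and the sum tops out around 1 MB. */
        uint32_t phys_addr(uint16_t segment, uint16_t offset)
        {
            return ((uint32_t)segment << 4) + offset;  /* seg*16 + off */
        }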

  • @ArneChristianRosenfeldt · 2 years ago

    Why would you need to write that manually? So we got a talk about Itanium without a piece about compilers, but with VRAM. So the idea is that memory allocation leads to fragmentation, so it could be that you cannot place large chunks of code anywhere. Thus you are supposed to cut your code into small pieces. Memory is expensive, so you have DLLs with < 64 kB of code and load those needed right now from HDD into RAM. Von Neumann: shared RAM for code and data. So you can either scroll through your whole text or have the printer spooler loaded. For data you are supposed to use a b-tree and swap HDD sectors into RAM segments. Images on 8-bit computers were already organized in tiles.. a two-level 2D tree. Unfortunately, on the PC we got bitplanes for graphics. 48 kS of 8-bit audio (back in the day) fit into a segment. More importantly, there should have been a VSYNC interrupt on all PCs, not just CGA and then again in Windows times. Full-text search on a b-tree probably needs C++ to be readable.

  • @xenophore · 4 years ago

    I'd love to see how the ARM architecture compares, especially as it's currently possible to run VMS on a Raspberry Pi using QEMU or another emulator.

  • @stumpybear60 · 4 years ago

    ARM is supposed to be the future, as that architecture is much more power-efficient and the chips are getting more powerful with each developer's advancements.

  • @niks660097 · 2 years ago

    @@stumpybear60 ARM is just an ISA, not an architecture, and you didn't even watch the talk; he specifically talked about re-ordering before micro-op conversion, so a CISC can still do more work with fewer instructions.

  • @lawrencedoliveiro9104 · 3 years ago

    12:39 Particularly the vector extensions: MMX, SSE, AVX etc. They produce a combinatorial explosion in the number of instructions. Whereas the up-and-coming RISC-V architecture adopts an approach that harks back to the original Cray supercomputers, which had “long vectors” (variable-length up to 64 elements), instead of the short, fixed-length vector instructions popular today. An explanation of the reason why is here: www.sigarch.org/simd-instructions-considered-harmful/ .
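
    A small C sketch of the long-vector idea (illustrative only; VL_MAX is a made-up constant standing in for the vector length RVV's vsetvl grants at run time): the strip-mined loop is written once and serves any hardware vector length, instead of being rewritten for each fixed SIMD width as with MMX, SSE and AVX:

        #include <stddef.h>

        enum { VL_MAX = 64 };  /* illustrative hardware maximum, Cray-style */

        /* y[i] += a * x[i], strip-mined: ask for up to VL_MAX elements,
         * process vl of them, advance by vl - no per-width rewrite. */
        void daxpy(size_t n, double a, const double *x, double *y)
        {
            while (n > 0) {
                size_t vl = (n < VL_MAX) ? n : VL_MAX;  /* stands in for vsetvl */
                for (size_t i = 0; i < vl; i++)
                    y[i] += a * x[i];
                x += vl;
                y += vl;
                n -= vl;
            }
        }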

  • @lawrencedoliveiro9104 · 3 years ago

    35:50 There was also indexed mode, where a byte specifying the index register (and multiplication factor, as I recall) preceded the operand-descriptor byte.
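
    A toy C model of that effective-address computation (my reading of the VAX indexed mode, for illustration; on the real VAX the scale is implied by the operand size rather than stored in the index byte):

        /* Hypothetical sketch: effective address for a VAX-style indexed
         * mode - the base operand's EA plus the index register scaled by
         * the operand size. Not a decoder, just the address arithmetic. */
        unsigned ea_indexed(unsigned base_ea, unsigned rx_value, unsigned opsize)
        {
            return base_ea + rx_value * opsize;
        }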

  • 2 years ago

    You have some really cool stuff and knowledge, that's for sure! Been binge-watching your supercomputer vids and now the Alpha/x86 etc. video. I myself still have an Alpha EV56 600 MHz running; it wants to crash to SRM under Linux nowadays and I never figured out the problem after 5 years of trying. It runs NT 4.0 now (yeah, kind of sucks, but at least it's running) and I compiled Quake 2 and my own modded client for it and stuff, so I can play Quake 2 with mods online with friends. LOVE the Alphas! Great talks, will watch more!

  • @LossyLossnitzer · 4 years ago

    A very valuable presentation - Thank you

  • @lowellturner7012 · 6 years ago

    Watching this makes me sad about having had to give up my VAXstation (and Unisys U/5000), along with a number of other machines, when I left Virginia.

  • @briancase6180 · 1 year ago

    So, you can't put an asterisk after CISC for x86, because x86 is an architecture, not a microarchitecture. The implementation does not have any effect on how an architecture is classified.

  • @KingOfHighFives · 2 months ago

    As a UNIX / Linux nerd, I've always found the VAX architecture and the VMS OS fascinating! I'd love to get into VMS but I'm not sure how to run it on x86.

  • @lawrencedoliveiro9104 · 3 years ago

    19:49 Is it worth separating out the transistor count into “scalable/repetitive” (e.g. caches) versus “random logic” (function units, controllers, sequencers etc.)? I suspect the former would dominate the count, while the latter may give more insight into the innate complexity of the chip.

  • @marcovtjev · 2 years ago

    ... and especially consumer SKUs also feature a GPU. Btw, fun to see this lecture by Camiel while Apple announced a 114-billion-transistor processor last week (OK, a joined chip, but still).

  • @amkhrjee · 1 year ago

    This is immensely educational for an undergrad like me. It would be awesome if you could provide the slides for later reference (of course, if that isn't a problem). Thanks a lot for uploading this here.

  • @lawrencedoliveiro9104 · 3 years ago

    22:10 One classic example of the performance drawbacks of CISC is this: consider how to save registers R0-R5 on the stack. The VAX has a very concise way to write this, PUSHR #^M<R0,R1,R2,R3,R4,R5>, which is just 2 bytes (one opcode byte plus a one-byte register-mask literal), versus the more long-winded PUSHL R5 / PUSHL R4 / PUSHL R3 / PUSHL R2 / PUSHL R1 / PUSHL R0, which is 12 bytes long (each PUSHL Rn is an opcode byte plus a register operand specifier). Guess which one is faster?

  • @herrbonk3635 · 2 years ago

    That's no drawback "of CISC" per se, just a sloppy implementation. Either badly written microcode or too little hardware to do it efficiently on that particular version of the VAX machine (that RISC proponents love to talk about). Compare the string (or repeat) instructions on x86 or Z80: pretty slow on the original 8086/8 and Z80 but efficient on the Pentium and eZ80, "despite" still being based on microcode. I'm no fan of the 68K family or anything, but IIRC the same holds for "push/pop multi-regs" on the 68030/40 versus the older 68000/10/08.

  • @oisnowy5368 · 2 years ago

    Would have to know the architecture to answer that one. If the single instruction has to handle all sorts of memory errors, then I can easily imagine it taking longer. But it is not a CISC drawback. The ARM originally had a single instruction to write multiple registers as well.

  • @ArneChristianRosenfeldt · 1 month ago

    @@oisnowy5368 Which makes sense, because load/store blocks the instruction fetch from the common memory (cache on the GBA). Did Intel have a patent on instruction queues? Is ARM supposed to be simple? Does ARM have post- and pre-increment/decrement to do something for two cycles? JRISC added reg-mem instructions to keep the CPU busy..

  • @DaisakuIkeda-nd6en · 6 months ago

    Digital's new Alpha processor, the 21364A, is fantastic: good frequency and higher IPC than the 21264A.

  • @lawrencedoliveiro9104 · 3 years ago

    28:50 Why couldn’t you implement a VAX processor the same way? Because the market for x86 is several orders of magnitude larger, which means a correspondingly larger monetary investment available to make it work. There is no technological issue, only an economic one.

  • @FilipiVianna · 3 years ago

    At 48:58 you compare architectures, but did not include ARM and RISC-V. I see the talk was back in 2017, but how do you see VMS today, considering the current state of development on ARM and RISC-V and the recent ARM acquisition by NVidia?

  • @lawrencedoliveiro9104 · 3 years ago

    47:01 The MIPS bar has been cut short. Sales continue right through today. In fact, MIPS chips outsell x86 by about 3∶1.

  • @JoseJimeniz · 2 years ago

    I don't imagine the places where MIPS chips are running (cars, robots, televisions) will be running OpenVMS.

  • @lawrencedoliveiro9104 · 2 years ago

    This is not about running OpenVMS. MIPS never ran OpenVMS.

  • @JoseJimeniz · 2 years ago

    @@lawrencedoliveiro9104 This video was advocating porting OpenVMS to another CPU platform. They do not need to think about MIPS.

  • @lawrencedoliveiro9104 · 2 years ago

    @@JoseJimeniz So why did they mention it?

  • @JoseJimeniz · 2 years ago

    @@lawrencedoliveiro9104 It was a survey of all four chips: explaining their architecture, and then saying why we shouldn't port to that one. > "I really believe that there's no way OpenVMS has a future if we don't get it to run on x86, because this is the architecture of choice whether we really like it or not. This is just the world we have to live in." He surveyed other chips that they had absolutely no intention of porting OpenVMS to. It was a survey of CPU architectures, including ones that are useless for OpenVMS, acknowledging that x86 has strange cruft going back 50 years, and that now it's time to port VMS to it. Why did he mention MIPS if he had no intention of advocating porting VMS to it? The same reason he mentioned other architectures that he had no intention of advocating porting VMS to.

  • @VioletGiraffe · 2 years ago

    @10:19 Coffee Lake is mistakenly marked as a "tick"; it's the same uarch as Kaby Lake before it.

  • @diegonayalazo · 2 years ago

    Thanks Camiel

  • @stefankral1264 · 2 years ago

    Skylake adds some protection against the effects of cosmic rays? Sounds right to me:)

  • @jj74qformerlyjailbreak3 · 1 year ago

    So you're telling me that a DEC/Harris J-11 processor has floating point? And it could possibly be the first.

  • @jimcameron6803 · 2 years ago

    "x86 is the architecture of choice, whether we like it or not." That sums it all up, really.

  • @EnricoSilterra · 2 years ago

    What about the iAPX 432?

  • @rabidbigdog · 3 years ago

    A little part of me dies inside every time I hear Alpha. Killing it should have been a capital crime.

  • @bbuggediffy · 5 years ago

    Maybe release it with a free license as well. Don't be stupid guys ... Great presentation.

  • @AmauryJacquot · 5 years ago

    and I think the future is in ARM and RISC-V

  • @CamielVanderhoeven · 5 years ago

    We're certainly following that closely. I see a lot of potential in RISC-V, but it's nowhere near ready for VMS to run on it, and ARM in the datacenter doesn't seem to be happening (yet). I doubt x86 will be the last platform we port to.

  • @tiagodeaviz · 5 years ago

    @@CamielVanderhoeven And there's OpenPOWER too.

  • @lawrencedoliveiro9104 · 3 years ago

    @@CamielVanderhoeven The most powerful computer in the world right now runs on ARM.

  • @rabidbigdog · 2 years ago

    If you haven't, I recommend watching Part 2 of Dave Cutler's oral history from the Computer History Museum, where he explains why RISC went away. I'd be extremely surprised if RISC could make a comeback - it's a stupid differentiation now anyway. As Cam explains, AMD64 is RISCy anyway.

  • @AmauryJacquot · 2 years ago

    @@rabidbigdog I did watch that, and the "risc went away" part was the best joke of the century...

  • @andrewlankford9634 · 4 years ago

    Why oh why has x86 been around for 40+ years? Does anyone happen to know? Gosh.

  • @Elios0000 · 3 years ago

    Momentum. Same reason businesses are STILL using Windows 98 and XP: it costs money to rewrite software and re-validate it. But with more powerful ARM chips and the return of thin clients, now with the help of remote VMs, things are slowly changing, and it looks like the future will be ARM.

  • @rabidbigdog · 3 years ago

    Lessons from IBM - compatibility triumphs over innovation in computing - as it should.

  • @jonathanvanier · 2 years ago

    Economics more than anything. Market size translates into the ability to fund the necessary and exponentially increasing development costs of new CPUs. Market size is ultimately what doomed the superior competing platforms (Alpha, SPARC, MIPS, etc.). And if not for Intel missing the boat on the soon-to-be-massive mobile phone market (Apple did approach Intel first for the iPhone's CPU, but they declined), ARM (and TSMC/Samsung) would never have developed the market size needed to compete with Intel on performance.

  • @PEGuyMadison · 3 years ago

    Intel bought the Alpha dispatch technology from DEC and integrated it into their processors ages ago; it's a much different CISC processor than one would think. With this technology, x86 performance went far beyond what RISC was capable of... and RISC became clock-limited.

  • @herrbonk3635 · 2 years ago

    Are you saying the P6 (Pentium Pro/II/III) was based on some Digital patents? Don't forget that the 486/Pentium were as fast as RISC processors too, already in 1989/92. Cyrix did the same, a while later, but in a more extreme way (implementing speculative execution via register renaming in a fixed dual pipeline). On the next level, AMD had fully dynamic microcode translation and dispatch in the K5, K6 and K7/K8/Athlon, just like in the Intel P6 of 1996.

  • @PEGuyMadison · 2 years ago

    @@herrbonk3635 Yes, 486s were very fast at the end. Intel did delay the original Pentium because the 486 family was so close in performance and the P5 ran really hot. But the real threat was PowerPC, so Intel stockpiled Pentiums and dumped them to flood the market, which killed the PowerPC. But Intel bought the dispatch patents from DEC, which allowed them to address more functional units each cycle through the instruction dispatch technology that DEC developed for the Alpha. Dell actually had a server in the lab ready for market based on the Alpha technology, but it was canned since Intel would have withheld supply from Dell for selling both Intel and Alpha.

  • @herrbonk3635 · 2 years ago

    @@PEGuyMadison What patent? Are you saying that the difference between the tightly pipelined 486 and Pentium on one side and the dynamic-dispatch P6 and K7 on the other was DEC's invention? If Intel really bought any patents from DEC, then AMD would need the same for its K5, K6, K7, K8.

  • @herrbonk3635 · 2 years ago

    @@PEGuyMadison Decoupled execution units were already used on the 8086, 286 and 386, before the tightly pipelined 486 and Pentium. So what exactly was unique with the PRISM or Alpha design? Speculative execution via register renaming? Cyrix implemented that as well, in the M1 ("6x86"). Or was the thing in that patent the _dynamic_ and buffered translation to microcode that Intel and AMD use?

  • @activelow9297 · 2 years ago

    Can't kill the x86... It's going to live forever! Hahahahaha!

  • @lawrencedoliveiro9104 · 3 years ago

    17:21 Basically, any new company that starts up trying to make x86-compatible chips is going to get sued out of existence by Intel.

  • @user-ge4uk9ui8y · 2 years ago

    Microsoft internally calls 64-bit x86 "AMD64".

  • @scharkalvin · 2 years ago

    You left out AMD Threadripper

  • @rabidbigdog · 2 years ago

    The talk is from 2017.

  • @Elios0000 · 3 years ago

    Yeah, x86 has hung on because businesses are SUPER cheap about upgrading their systems and don't want to have to rewrite their software. Even getting them off of XP and 98 has been a HUGE pain in the ass; I don't even want to think about getting them to move to another architecture that would need a new OS and all new software. Good news is ARM is now making inroads, so it's looking like the far future will be ARM.

  • @SmithKerona · 3 years ago

    Why in the world would businesses need to be forced to upgrade just because there is a new architecture? Let's face it: most business applications don't need the computing power present in modern CPU architectures. For example, I currently work for a utility that still has a vast number of 6809-based computer systems that control and collect remote data for telemetry and SCADA. These remote computer systems run at a mere 2 MHz clock frequency and communicate with the master station over serial channels at 1200 baud. The system works flawlessly and doesn't need any upgrade. Now why should this utility be forced to upgrade to a modern processor with a new architecture? There should always be a business case to upgrade... not because some CPU manufacturer or software house says so...

  • @MultiPetercool · 3 years ago

    VMS is the main reason DEC died. Had they embraced Unix earlier, Sun Microsystems would never have existed. VAX was the most popular Unix platform till Sun came along. I had a job at Bell Laboratories in Murray Hill. Two of the first machines on Arpanet were there: a pair of VAXen called Alice and Rabbit, attached to a star coupler. A Unix cluster! DEC totally blew it.

  • @jonathanvanier · 2 years ago

    They did miss the Unix boat (at least initially), but the real mistake was missing the workstation/PC boat. That's what killed DEC. As for VMS, it was superior to Unix, so it wasn't a bad thing to keep it around.

  • @MultiPetercool · 2 years ago

    @@jonathanvanier While the MIPS offerings were good, UNIX remained a red-headed stepchild at DEC. Whether VMS was better or not isn't the point. The market wanted UNIX.

  • @jonathanvanier · 2 years ago

    @@MultiPetercool This has become a bit of a legend at this point. I'm not sure why - maybe because it's a simple narrative with an obvious culprit. But the actual story is much more interesting.

    While it's true that DEC was somewhat slow to embrace Unix, they did do so eventually, and in the meantime they were still very much the platform of choice for Unix users. VAX remained the exclusive platform for BSD Unix until the 1990s (when it was ported to x86). In fact, Linux was born because the BSD port to x86 took so long and was mired in legal controversies. The original AT&T Unix was of course developed on PDPs and didn't really start migrating away before the mid-to-late '80s with the arrival of 68000-based workstations. DEC's first official Unix release (Ultrix) was all the way back in 1984. And by the 1990s, they were all in with the OSF Unix initiative, leading to Digital Unix in 1992. So Unix was very much available and supported on their systems.

    As stated before, the real problem was that their offerings were quickly being outpaced by microcomputer systems. They equivocated for too long on what to do. Olsen didn't believe in the potential of microcomputers and failed to see them as a threat. By the time the danger became too obvious to dismiss, he stupidly decided that he didn't want to cannibalize VAX sales (not unlike IBM's initial reluctance to enter the PC market or develop a 386 protected-memory system). This led to the absurd cancellation of Cutler's PRISM project, which would have been DEC's last chance to turn the tide (even though it was already kind of late). By the time DEC released their half-hearted MIPS-based workstations, they were already a full decade late, with Sun workstations having flooded the market. When Olsen was fired from the company he had founded, and the Alpha project was rushed to market, it was effectively too late. By then the server and workstation market was resolutely in the hands of RISC microcomputing platforms developed in the mid '80s, when DEC was still equivocating on whether to enter the market at all.

    In short, DEC was late, way too late, in the microcomputer market. The company that created the minicomputer simply missed the boat on the microcomputer. Their downfall wasn't because of Unix; they embraced it sooner than most people probably realize. The problem was that they didn't have a platform to compete in the microcomputing market. By the time they sort of did, with the Alpha, the big-iron Unix platforms' price-to-performance ratio was quickly being outpaced by Intel's offerings - which benefited from the economics of the huge PC market - and the gates soon opened to the Linux/x86 onslaught at the turn of the century. It's unlikely DEC would have survived that anyway. They might have, but at the cost of transforming themselves into a software company, selling their excellent database software and operating systems, not unlike Microsoft or Oracle. But that's squarely in alternate-reality territory.

    In short, Unix is not what killed DEC. The microcomputer did.

  • @MultiPetercool · 2 years ago

    @@jonathanvanier You're preaching to the choir, dude! I started at DEC in 1989 and left after the Alpha debacle. I previously worked for Plexus. If you don't know who they were, google Onyx Systems.

  • @jonathanvanier · 2 years ago

    @@MultiPetercool You must have great stories about your time at DEC during those troubled times. I'd love to hear 'em!

  • @lawrencedoliveiro9104 · 3 years ago

    42:52 The most useless kind of segmentation. Not like, say, how the old Burroughs machines implemented it.

  • @LeeCourtney · 5 years ago

    You know that the Itanium architecture was developed at HP by the team that originally did HP PA-RISC? And that, for the most part, the HP team that developed Itanic was sold off by HP to Intel, where they continued to develop follow-on implementations. You should do some more research on the origins of that architecture; it did not originate at Intel at all.

  • @CamielVanderhoeven · 5 years ago

    I was - and am - well aware of that. I could have mentioned that of course, but I'm not aware that I claimed the architecture originated at Intel, nor do I think that this is essential information given the goal of this presentation, which was to introduce bootcamp attendees to how x86 differs from the three earlier architectures OpenVMS ran on.

  • @peteherrera1502 · 2 years ago

    The entire x86 computer architecture is living on borrowed time. It's a dead platform walking. The future belongs to ARM, and Apple's A-series SoCs are leading the way. ... After that, we'll have a better idea of whether or not Intel can compete with ARM in portable computing.

  • @herrbonk3635 · 2 years ago

    I remember many people said exactly the same in the mid-1980s, usually RISC proponents. The same people also said chips like the 486 or Pentium would be totally impossible to build, and slow as hell. (With that said, I sure don't appreciate x86 being so dominant, but neither ARM.)

  • @lelsewherelelsewhere9435 · 2 years ago

    @@herrbonk3635 I think the market is a bit different today. ARM got its current popularity due to cell phones. The requirements of modern cell phones and laptops drive trends in weird ways that have no equivalent in the market of the 1980s. (Though the pie-in-the-sky idea of ARM suddenly taking over, which ignores the ingenuity of engineers working with "older" architectures, systems, etc., may be more difficult than people realize; as you point out, it's been promised to us before...)

  • @kylew4678 · 1 year ago

    I think IA-64 is cool tho.
