64bit data 128Bit address...Re: Building a Z-80 (64bit!

From: Hans Franke <franke_at_sbs.de>
Date: Mon Oct 26 13:40:01 1998

>> "Max Eskin" <maxeskin_at_hotmail.com> wrote:
>>> No application _requires_ any number of bits > 1. It's a question of
>>> performance. After all, a Z80 could have 512M RAM, just not
>>> contiguously (and would probably require a lot of hardware to access
>>> it).
>>
>> OK, then the Z80 system will require 19 bits of address. Sure,
>> some of those bits aren't coming directly out of the CPU, but
>> they're coming from somewhere.

> Actually the figure is 29 bits (it was 512M not 512K) but I agree with you
> 100% in principle.

> The way I look at it is this: [...]

> I therefore see address buses growing at 16 bits every 30 years. That's
> just over a bit every 2 years - slower than I expected but not much.
> Someone (I forget who) said that memory chips double in capacity every 18
> months. This would give 16 bits in 24 years.

Interesting scenario, especially when connected to Moore's Law
(didn't he state that with regard to integration density?).
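
Just to sanity-check the quoted growth rate (and the 48/64-bit
projections further down) - a little Python sketch; the 1990 baseline
of roughly 32 address bits is my own assumption, not a figure from
the thread:

    # Address-bus growth: rough check of the quoted figures.
    # Assumption (mine): mainstream micros had ~32 address bits around 1990.
    BASE_YEAR, BASE_BITS = 1990, 32

    rates = {
        "16 bits per 30 years": 30 / 16,   # ~1.9 years per address bit
        "1 bit per 18 months":  1.5,
    }

    for label, years_per_bit in rates.items():
        hit_48 = BASE_YEAR + (48 - BASE_BITS) * years_per_bit
        hit_64 = BASE_YEAR + (64 - BASE_BITS) * years_per_bit
        print(f"{label}: 48 bits ~{hit_48:.0f}, 64 bits ~{hit_64:.0f}")

which lands the 48-bit limit around 2014-2020 and the 64-bit limit
around 2038-2050, depending on which rate you believe.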


> I claim that the assertion that we'll see even 64-bit address spaces being
> used anything like up by 2003 is unfounded. According to that growth rate
> above, we will start hitting the limit of 48-bit addressing - 256 TWord -
> in the '20s, and the limit of 64-bit addressing, 16 exawords (or exabytes,
> possibly), in the '50s (or '40s at 1 bit per 18 months). Many of us will
> probably still be alive then (I shall be celebrating my 83rd birthday in
> March of 2050 )

Hmm, I will have my 88th by then - let's join in :)

> - and I for one would like to see what sort of technology
> will be used to store 16 exabytes in a space smaller than a mountain!

The size isn't the real problem - you already get 16 Gig in less
than 320 cm^3 (using hard disk technology), which is about 50 Meg
per cm^3, which gives us 100x100x100x50 Meg or roughly 50 Tera per
m^3 (only heat will be a problem, but if we assume the drives will
shrink by a factor of 2 within the next few years, we get enough
space for cooling without developing any new technology).
50 Tera are 50x2^40 bytes, so for 16 exawords (128 exabytes at
8 bytes per word) you need about 10x2^18 m^3, or 64x64x64x10 m^3 -
just the size of an ordinary 160-storey skyscraper. Nothing really
big, is it? - and especially not a mountain. And if we assume the
density increases by a factor of 10 within the next years, it is
less than a warehouse.
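
For anyone who wants to redo the numbers, here is the same estimate
as a small Python sketch - the 320 cm^3 per drive, the 64x64 m
footprint and the 4 m per storey are just the assumptions used above:

    # Rough storage-density and building-size estimate (sketch, not exact).
    MEG, TERA = 2**20, 2**40

    drive_bytes = 16 * 2**30               # 16 Gig per drive
    drive_volume_cm3 = 320                 # assumed volume of one drive
    density_cm3 = drive_bytes / drive_volume_cm3    # ~50 Meg per cm^3
    density_m3 = density_cm3 * 100**3               # ~50 Tera per m^3

    total_bytes = 2**64 * 8                # 16 exawords of 64-bit words
    volume_m3 = total_bytes / density_m3   # ~2.7 million m^3, ~10x2^18

    height_m = volume_m3 / (64 * 64)       # on a 64 m x 64 m footprint
    print(f"density: {density_cm3 / MEG:.0f} Meg/cm^3, "
          f"{density_m3 / TERA:.0f} Tera/m^3")
    print(f"volume:  {volume_m3:,.0f} m^3")
    print(f"tower:   64 x 64 x {height_m:.0f} m "
          f"(~{height_m / 4:.0f} storeys at 4 m each)")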

This is all just (near) today's technology - the real problem
is the access time ... A wire could easily end up 100 m long
between a storage device at the periphery and the 64-bit computer
in the middle - and 100 m is just 1/3,000,000 s, or 333 ns, of
travelling time ... seems we have created some kind of pipelining
in front of the CPU :) So, assuming a 1 us round-trip time, we
could just about run a 4 MHz Z80 ... hmm, didn't he ask for a
64-bit Z80?

(I left the disk access time out of the calculation, but according
to all the information available from disk manufacturers, the
internal caches will reduce it to almost nothing :)
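
The wire arithmetic, for completeness, as a sketch - assuming the
signal travels at roughly the speed of light (in real cable it is
somewhat slower, which only makes things worse):

    # Signal travel time over a 100 m wire vs. one Z80 memory cycle.
    C = 3e8                        # assumed propagation speed, m/s
    wire_m = 100.0

    one_way_s = wire_m / C         # ~333 ns
    round_trip_s = 2 * one_way_s   # ~667 ns; call it 1 us with margin

    # A plain Z80 memory read machine cycle is 3 T-states;
    # at 4 MHz one T-state is 250 ns.
    mem_cycle_s = 3 / 4e6          # 750 ns

    print(f"one way:     {one_way_s * 1e9:.0f} ns")
    print(f"round trip:  {round_trip_s * 1e9:.0f} ns")
    print(f"4 MHz Z80 memory cycle: {mem_cycle_s * 1e9:.0f} ns")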

Regards,
Hans

P.S.: For a 128-bit address range we just have to scale the building
up by a bit more than a factor of 2,500,000 in each direction, giving
a size of roughly
   170,000 x 170,000 x 1,700,000 km^3, compared to the volume of the
Earth of about
    10,000 x  10,000 x    10,000 km^3 (just from memory).
And don't forget the travelling time of the signals - something like
.6 seconds just to cross one 170,000 km edge of the cube.
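
And the P.S. figures as a sketch - the Earth volume of roughly
1.08x10^12 km^3 is textbook memory; the rest follows from the
building above:

    # Scaling the 64-bit "skyscraper" up to a 128-bit address space.
    vol_64bit_m3 = 64 * 64 * 640        # the building from above
    linear_factor = 2 ** (64 / 3)       # 2^64 more words -> cube root per axis

    side_km = 64 * linear_factor / 1000     # footprint edge, ~170,000 km
    height_km = 640 * linear_factor / 1000  # ~1,700,000 km
    vol_km3 = vol_64bit_m3 * linear_factor**3 / 1e9

    earth_vol_km3 = 1.08e12             # roughly, from memory
    light_s = side_km / 3e5             # time to cross one edge at c

    print(f"scale factor per axis: {linear_factor:,.0f}")
    print(f"building: {side_km:,.0f} x {side_km:,.0f} x {height_km:,.0f} km")
    print(f"volume:   {vol_km3:.2e} km^3 (~{vol_km3 / earth_vol_km3:,.0f}x Earth)")
    print(f"light needs {light_s:.1f} s just to cross one edge")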

--
I think, therefore I am, therefore good
HRK
Received on Mon Oct 26 1998 - 13:40:01 GMT
