Two trends call into question the current practice of fabricating microprocessors and DRAMs as different chips on different fab lines: 1) the gap between processor and DRAM speed is growing at 50% per year; and 2) the size and organization of memory on a single DRAM chip are becoming awkward to use, yet capacity is growing at 60% per year.

Intelligent RAM, or IRAM, merges processing and memory into a single chip to lower memory latency, increase memory bandwidth, improve energy efficiency, and reduce size. Surprisingly, the integration of IRAM's processor/cache/memory with high-speed serial I/O lines may also lead to very good I/O performance.

This talk explores some of the opportunities and challenges for IRAMs, and suggests that to realize IRAM's potential we need better ideas than "Let's build a bigger cache!"

After looking at some potential applications, such as intelligent personal digital assistants with speech input and intelligent disks for decision support, we propose reviving vector architectures. Vector architectures leverage the high-bandwidth, low-latency memory of IRAM; match the narrower widths and real-time demands of multimedia; and scale well with the increasing number of transistors and the wire-delay challenges of future integrated circuits.

I conclude by speculating on applications for a DRAM-size chip in 2-3 years that consumes 2-10 watts of power, contains 16-24 MBytes of memory, has about 1 GByte/sec of I/O, and crunches at the rate of 1-4 GFLOPS (64-bit floating point) and 8-32 GOPS (8-bit fixed point).

Today, the semiconductor industry is sharply divided into processor and memory camps. If IRAM proves successful, unification may come to the semiconductor industry. In such a future, it's unclear which companies will ship the most memory and which will ship the most processors.

(To follow the IRAM project, see http://iram.cs.berkeley.edu/)