Author Topic: Chip Circuits, schematics, HDL, layouts, libs, and IP thread  (Read 7265 times)
solutions
Hero Member
« on: August 21, 2011, 10:41:26 22:41 »

Hi,

I think it would be useful to have a thread where we can request and share schematics, files, and HDL for blocks and standard cells for IC design.

If you have or need IP or libraries - circuits, cells, blocks, SPICE blocks, or HDL; schematics, listings, layouts, or library files - please post here.

thanks

Posted on: August 21, 2011, 10:39:14 22:39 - Automerged



REQ: Rail-to-Rail Comparator

What we'd like to have:

* Rail-to-rail inputs; sensing to ground is a must, to Vcc not so much.
* 5V, 3.3V, 3.3V-to-5V (ideally), anything-to-5V, anything-to-10V operation, but it needs to be single supply.
* Ideally in 1um CMOS, though we can scale the design if we have to.
* Temperature performance -40 to 125C, though 80C may be OK.
* Low offset, low drift, and good sensitivity.
* Hysteresis is fine; having it or not doesn't matter.
* NOT latched.
* Continuous time (not switched or clocked).
* All biasing and current mirrors included in the schematic.
* Ideally shows device widths.
* Ideally a real design rather than a paper, but we'll take what we can get.
* No patents on it that are still valid.

This is for a standard cell, so please don't post things like an LM339. This is for an IC design we are playing with.

thanks.
« Last Edit: August 22, 2011, 08:30:56 08:30 by solutions » Logged
BuBEE
Newbie
« Reply #1 on: August 24, 2011, 03:01:57 15:01 »

I agree.
Logged
solutions
Hero Member
« Reply #2 on: October 27, 2012, 11:10:22 23:10 »

So, does anyone have CMOS (or BiCMOS, bipolar) regulator, rail-to-rail comparator, DAC/ADC, or controller IP (ideally schematics and GDSII layouts) that they can share? For chip design, not boards.
Logged
Matrixx
Junior Member
« Reply #3 on: October 29, 2012, 03:13:38 15:13 »

The OPA2342 and OPA4342 are good for rail-to-rail applications, and they are single-supply capable.
I use them in my designs.
Logged
Aldec_forever
Senior Member
« Reply #4 on: December 02, 2012, 07:33:05 19:33 »

Please post here all requests for digital and analog/RF designs and IPs (not mechatronics, microcontroller, or simple SPICE-related material). For good documentation, post guides on how to use them. If you are familiar with opencores.com, the cores on that site are designed only at the RTL level (usually Verilog or VHDL) and do not consider system-level or technology constraints (for some designs, only FPGA implementations have been reported). My goal for this thread is to take system-level considerations downstream to RTL synthesizers (like Magma Talus, Synopsys Design Compiler, etc.) and finally on to the transistor or layout level. I think this thread can also be a good resource for integrating tools.

As a first try, I have written the following C code snippet, which is part of a communication filter.

#define NUM_TAPS 8

void fir_filter(int *input, int coeffs[NUM_TAPS], int *output) {
    static int regs[NUM_TAPS];  /* delay line; the original declared this with an 8-bit HLS integer type */
    int temp = 0;
    int i;

    SHIFT: for (i = NUM_TAPS - 1; i >= 0; i--) {
        if (i == 0)
            regs[i] = *input;
        else
            regs[i] = regs[i - 1];
    }

    MAC: for (i = NUM_TAPS - 1; i >= 0; i--) {
        temp += coeffs[i] * regs[i];
    }

    *output = temp >> 7;
}
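Before worrying about hardware mapping, the snippet can be sanity-checked on a host with an impulse response. A minimal sketch, repeated here with the array indexing written out so it compiles as plain C (the coefficient values below are made up for illustration): feeding a single sample of 128 (1.0 in Q7) should replay the coefficients, one per call.

```c
#include <assert.h>

#define NUM_TAPS 8

/* FIR from the post in plain C: shift the delay line, multiply-
 * accumulate, then rescale (coefficients assumed to be in Q7). */
void fir_filter(int *input, int coeffs[NUM_TAPS], int *output)
{
    static int regs[NUM_TAPS];              /* delay line, zero-initialized */
    int temp = 0;
    int i;

    for (i = NUM_TAPS - 1; i >= 0; i--)     /* SHIFT loop */
        regs[i] = (i == 0) ? *input : regs[i - 1];

    for (i = NUM_TAPS - 1; i >= 0; i--)     /* MAC loop   */
        temp += coeffs[i] * regs[i];

    *output = temp >> 7;                    /* Q7 rescale */
}
```

An impulse of 128 makes each call return the next Q7 coefficient, which is a convenient first check of the shift/MAC structure.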


Can anyone give a systematic flow for implementing this in real hardware (ASIC or FPGA), with all the considerations like delay, area, etc.? Please do not forget to name the tools you use. If you are an ASIC designer, I will be pleased to hear your notes.

Posted on: October 09, 2011, 08:09:28 20:09 - Automerged

Hi everyone!
As any micro-architecture designer knows, the hardware side of a system is composed of miscellaneous big and small parts (in RTL or synthesized form). Sometimes rich companies buy cores from IP providers without full descriptions of them (not part of the agreement, or kept secret in encrypted form). In this situation, lots of test cases have to be applied to understand the functionality inside the cores. This method is really time consuming, and trying to design everything from scratch for lazy asses like me is killing.
Does anyone know of faster ways to investigate, especially mathematical methods like formal ones?
Thanks so much!
« Last Edit: December 02, 2012, 07:37:59 19:37 by Aldec_forever » Logged

People who know their place and function in society are much more secure and comfortable in life than those who do not.
TucoRamirez
Senior Member

« Reply #5 on: December 02, 2012, 11:58:13 23:58 »

i remember we did an FIR filter as an exercise with an FPGA board, an ADC and a DAC...
the steps we did were:

1: study of the filter in MATLAB.
2: study of the quantization and arithmetic adjustments to fit in simple cells.
3: calculation of the corrected coefficients.
4: adder and multiplier implementation in Quartus.
5: verification tests ...

unfortunately i don't have my course material anymore; i think i can get the FPGA files from a friend... i'll dig through my disk anyway ...
Logged

Whoever double crosses me and leaves me alive... he understands nothing about Tuco.
fpgaguy
V.I.P
« Reply #6 on: December 06, 2012, 12:11:36 00:11 »

for the FIR example, one possible path would be:

1/ recode the given code in SystemC
1a/ code a testbench in SV or SystemC
2/ build in Forte as a behavioral model, and debug/verify using Forte / Verdi / or just SystemC
2a/ design/add an I/O pipeline for model purposes only, not for implementation
2b/ modify the design for area/speed by adjusting the SystemC code and Forte pipeline options
3/ convert to an equivalent RTL expression using Forte
4/ verify the RTL design with ModelSim
5a/ if required, add FPGA- or ASIC-specific macro blocks in Forte
6/ repeat step 4 as needed
7/ export the RTL to your favorite FPGA path

8/ in the FPGA project, place this in your processing pipeline as an imported Verilog module
9/ add your real I/O structures made elsewhere

this is very similar to the example design in Forte


Logged
Aldec_forever
Senior Member
« Reply #7 on: December 06, 2012, 01:18:31 13:18 »

Thanks, guys, for the replies!
Yes, that's right. This flow is the most successful and fastest way of designing and implementing. Finding the best coding style that every tool can handle is the most time-consuming part. The HLS tools on the market are going global, but there are not many of them: Forte, CTOS, Synphony C. Their mechanisms are almost the same. Unfortunately, they do not cover the complexities of processor design, and that's one of the weak points of these kinds of tools.
BTW, fpgaguy, do you have a license for Forte? I am not a user of Verdi (maybe because it runs on Linux, or maybe because its company no longer exists and has been merged into Synopsys), though I have heard of its power. Can someone please explain its main differences from ModelSim, Active-HDL, or even Riviera?
« Last Edit: December 06, 2012, 01:52:44 13:52 by Aldec_forever » Logged

solutions
Hero Member
« Reply #8 on: December 06, 2012, 03:58:35 15:58 »

Guys: Please keep the thread focused on the topic intent. You can always start a tools request elsewhere.

Proven and debugged libraries and designs/IP, including "Mechatronics, Microcontroller or simple spice related materials"

thanks
Logged
Aldec_forever
Senior Member
« Reply #9 on: December 06, 2012, 09:51:56 21:51 »

Thanks, solutions, for the notice!
The topic you created is quite general, and an IC circuit without a discussion of the tools used to develop it is not complete. As you know, every company pushes its customers to use its own design flows and tools, and finding any design that is independent of tools is practically impossible. OK, I admit that I should not ask for any tool licenses, but, again as you know, naming the tools is vital for re-developing or using the IPs in a whole project.

cheers!
« Last Edit: December 06, 2012, 09:54:56 21:54 by Aldec_forever » Logged

Aldec_forever
Senior Member
« Reply #10 on: December 15, 2012, 04:07:24 16:07 »

Large schematic (attached). Who can figure out what's going on inside? Is there any method for reverse engineering it and discovering the designer's intent?
Just zoom in.
« Last Edit: December 15, 2012, 07:22:37 19:22 by Aldec_forever » Logged

UncleBog
Junior Member
« Reply #11 on: December 15, 2012, 05:27:24 17:27 »

It looks like a machine-generated schematic, probably from an HDL source intended for an FPGA or gate-array ASIC. Many HDL development tools will generate such schematics.
Logged
Aldec_forever
Senior Member
« Reply #12 on: December 15, 2012, 05:58:04 17:58 »

Yes, you're totally right. But I am really interested in any tool or method that can help in the reverse direction. RTL compilers generate schematics and low-level code like the one above, most of the time unaware of the designer's view, and the designer usually has no choice except to rely on the tools' results, because who will try to verify the gate-level netlist (though the one I posted is not gates but RTL), especially as the design gets bigger and bigger?

If you cannot understand the netlist generated by a tool, you can never manipulate it in any way you like (for example, modifying it for DFT, or even trying to turn it into a low-power one).

The usual methodology is writing behavioral code (for example in VHDL) and synthesizing it (e.g., with Design Compiler, or XST or Vivado for FPGAs), but from that step down, about 90% of people do not try to investigate the generated netlist, and only verify it with incomplete waveform-based testbenches (not even assertion-based or formal ones).

Consider a company like Intel. What are they doing at these steps when, say, 1,000,000,000 transistors (approximately 250,000,000 gates) have resulted?

I know that lots of tools exist from e.g. Cadence and Synopsys, but one systematic flow that covers all the steps (a top-down flow) is still needed. Could someone discuss this?

Thanks so much for everybody's attention!
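As a toy, plain-C illustration of the self-checking idea (as opposed to eyeballing waveforms): compare a gate-level model against its behavioral reference over random vectors. The full adder and the vector count here are placeholders; real flows use assertion-based testbenches or formal equivalence checkers, but the principle is the same.

```c
#include <stdlib.h>

/* "Netlist" view of a full adder: explicit gates. */
unsigned fa_gates(unsigned a, unsigned b, unsigned cin)
{
    unsigned s    = a ^ b ^ cin;
    unsigned cout = (a & b) | (cin & (a ^ b));
    return (cout << 1) | s;         /* 2-bit result: {cout, sum} */
}

/* Behavioral reference: just arithmetic. */
unsigned fa_ref(unsigned a, unsigned b, unsigned cin)
{
    return a + b + cin;
}

/* Self-checking comparison over n random vectors; returns mismatch count. */
int equiv_check(int n)
{
    int errors = 0;
    for (int i = 0; i < n; i++) {
        unsigned a = rand() & 1, b = rand() & 1, c = rand() & 1;
        if (fa_gates(a, b, c) != fa_ref(a, b, c))
            errors++;
    }
    return errors;
}
```

For a block this small the check could simply be exhaustive; random vectors are only interesting once the state space outgrows enumeration, which is exactly the point being made above.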
« Last Edit: December 15, 2012, 06:25:59 18:25 by Aldec_forever » Logged

fpgaguy
V.I.P
« Reply #13 on: December 17, 2012, 07:15:07 19:15 »

that schematic looks like a recursive Viterbi implementation for Xilinx V6 - I see some max-log-MAP code, some V6 references, and


...

But who knows(!)

Logged
Aldec_forever
Senior Member
« Reply #14 on: December 17, 2012, 08:16:31 20:16 »

FPGA companies try to encrypt the bitstream to protect it from unauthorized access by pirates. In the bitstream, the code is at an even lower level than a schematic, even lower than EDIF. But the FPGA companies have foreseen this and developed anti-hacking methods. My posted schematic is at a higher level than that, and if it were for an ASIC, it is obviously at a higher level than layout. Suppose a team whose major members have departed from a company, and the new team wants to discover the previous team's design structure. What should be done.......
« Last Edit: December 17, 2012, 09:48:38 21:48 by Aldec_forever » Logged

Aldec_forever
Senior Member
« Reply #15 on: January 12, 2013, 03:38:05 15:38 »

As we know, CPUs are a special kind of ASIC design. I have looked at lots of papers about maximum achievable frequencies in the ASIC field. Usually Fmax is determined by the critical (worst) path of the design after post-place-and-route static timing analysis.

For ASICs, Fmax is around 400 MHz or a little higher. I know this can be increased with additional pipeline registers or other techniques, but practical designs are traded off (area vs. speed).

Though current scaled CMOS technologies allow clock frequencies of several hundred megahertz, parallelization is still an effective methodology to achieve high throughput and to approach the long-term objective of 1 Gb/s in e.g. wireless communications. Furthermore, in high-throughput application-specific integrated circuit (ASIC) design, adopting lower-frequency parallel architectures instead of higher-frequency serial ones is an effective way to combat unreliability and reduce nonrecurring costs. This is one of the reasons for not considering higher frequencies.

My question is this:

What does the "3.1 GHz" in e.g. the Intel Core i7 series mean exactly? We know that such a digital clock signal cannot even be created with the usual current methods. So what is it for?
Logged

solutions
Hero Member
« Reply #16 on: January 12, 2013, 07:14:36 19:14 »

We had low jitter 20GHz PLL's in 28nm, 11GHz at 40 nm. PRODUCTION.

I've even overclocked 65nm at 10Gb/s on the bench and for customer demos under NDA back then (5 or 6 years ago). It's even easier if you don't care about pristine jitter and use a RO or you don't need I/O running at those bit rates, confining the high frequencies to core transistor use. Being thicker oxide, I/O transistors tend to be a fair bit slower.

All you need is power (the direct tradeoff is power vs. speed...area is a consequence of power), usually source-coupled or CML kinds of current-steering topologies, and, as a man, accept that HDL is for girls and that you need to SPICE the crap out of every aspect of the circuits....

With that acceptance, the clocks are indeed running at those frequencies in an Intel chip and they are distributed in the tree using DLLs and as differential signals if not restricted to local use.

YMMV, but 400MHz is "DC" in comms chips and high end processors.

FWIW, Hairapetian et al were doing 10Gb/s in 180nm and 130nm CMOS over a decade ago, making them multi-millionaires when Broadcom scooped them up.
« Last Edit: January 12, 2013, 07:20:52 19:20 by solutions » Logged
Bobbla
Newbie
« Reply #17 on: January 12, 2013, 07:42:45 19:42 »

Quote from: Aldec_forever: "What does the "3.1 GHz" in e.g. intel core i7 series mean exactly? We know that this digital clock signal can not even be created with the current usual methods. So what's that for?"

Well, don't quote me on this, but the way I understand it, they actually do run at GHz speed inside the CPU, and they actually do design the whole thing at the transistor level. There is a lot of abstraction, but they don't deal with FPGA-like gates. I believe this is what you call a full-custom design.

But to answer your "3.1 GHz" question: say a PIC MCU receives an external clock signal of 1 MHz; it then uses a 4x PLL to ramp the internal clock up to 4 MHz. The PIC's 4 MHz is the equivalent of the i7's 3.1 GHz. However, the PIC uses buses for external communication. Let's say it uses a UART at 9600 baud: it takes the internal 4 MHz system clock and some logic to scale the clock frequency down to 9600 Hz. In this case the PIC's UART is equivalent to many of the i7's buses.

In the good old days there was something called a northbridge, which connected the CPU with memory, the GPU, and the southbridge. The northbridge was connected to the CPU via the front-side bus (FSB); one could say the UART example is _sorta_ equivalent to the FSB if the PIC MCU were the "brain" of a larger system. Today, however, neither AMD nor Intel uses a northbridge; they have integrated it all into the CPU chip.

So you could say that the PIC's UART/SPI/I2C/etc. is equivalent to the i7's PCIe/DDR3/DMI/etc. It should be noted that the PIC MCU is a much more complete system than the i7, which relies on a great many other components to function properly.

Again... don't quote me on anything... I am no expert... and the more I write, the more I doubt...
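The clock arithmetic above is easy to make concrete. A sketch using the 1 MHz crystal, 4x PLL, and 9600 baud figures from the post (the nearest-integer rounding is an assumption; real baud-rate generators vary):

```c
/* External clock multiplied up by a PLL for the core, then divided
 * back down for a slow peripheral bus, as in the PIC/UART example. */
long baud_divider(long ext_clk_hz, int pll_mult, long baud)
{
    long sys_clk = ext_clk_hz * pll_mult;      /* core clock after PLL    */
    return (sys_clk + baud / 2) / baud;        /* nearest-integer divider */
}
```

With ext_clk = 1 MHz, pll_mult = 4, and baud = 9600, the divider comes out to 417, giving an actual rate of about 9592 baud, roughly 0.08% low, which a UART tolerates easily.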
The old way.. http://www.yale.edu/pclt/PCHW/clockidea.htm

Cheers...

edit: a bummer, someone beat me to it and I seem to have gone a little off topic... maybe. But at least solutions seems to know this stuff.
« Last Edit: January 12, 2013, 07:46:29 19:46 by Bobbla » Logged
Aldec_forever
Senior Member
« Reply #18 on: January 13, 2013, 02:44:42 02:44 »

The point is that clock speed was a relevant metric when CPUs obeyed Moore's Law in the following sense.

Suppose we have a CPU design implemented in a concrete CMOS node, that is, a CPU implemented with CMOS transistors that cannot be smaller than a fixed size. After synthesizing and physically implementing the CPU, the maximum clock speed is determined by the worst-case timing of the slowest "RTL" path.

If we keep the original CPU design but shrink the CMOS transistor size, there was a time when the clock speed increased because the worst-case timing shrank too. This behavior arose from the fact that, in an RTL transaction, delay was mostly dominated by the CMOS switching speed.

In this way, clock speed was a direct measurement of performance for a given CPU family.

But about ten years ago, when CMOS nodes reached the 100nm boundary, the situation changed dramatically: communication, i.e. data propagation through "metal wires," became more expensive in terms of delay, power, and even cost than the "silicon transistors" were.

In this situation, shrinking the transistor size doesn't translate into better CPU performance. The maximum frequency doesn't increase (in fact, during the last 10 years the maximum CPU frequency has remained practically unchanged in the x86 families). Jumping to a deeper transistor node only means that you can pack more transistors into the same die size!!

General-purpose CPU companies used this fact to introduce multicore designs, and in this way they've kept pace with Moore's Law. By running N processors concurrently, in parallel, you should be able to increase performance by a factor of N.

When working with FPGAs, concurrency and parallelism can be driven down to the bit level if necessary. For this reason, FPGAs are able to outperform theoretically faster CPUs when executing a concrete algorithm.

For this reason, GHz-order clock frequency, although physically feasible, is not a realistic metric for measuring processing power.

Quote from: solutions (Reply #16): "We had low jitter 20GHz PLL's in 28nm, 11GHz at 40 nm. PRODUCTION. [...] FWIW, Hairapetian et al were doing 10Gb/s in 180nm and 130nm CMOS over a decade ago."
They probably reached that speed in parallel, not serially. I mean, suppose a 100-bit single port: if every line outputs its chunk of data at 100 Mb/s, then the aggregate speed is 100 * 100 Mb/s = 10 Gb/s.

As you know, GHz is different from Gbit, because digital is different from analog or RF. Yes, you're right, comms chips run even faster, in the tens of GHz, but they are not doing processing or algorithms, only simple functions like amplification. Supercomputers may even have throughputs of tens or hundreds of terabits/s, but they never use the higher frequencies, because today's CMOS technologies cannot stand that; they do the tasks in parallel.

HDL is the design entry for describing the functionality of designs in terms of cycles. Most hardware designers use behavioral-style code as a first step and synthesize it; then the fine tuning starts. Today's systems are so complex that even HDLs can barely describe the functionality. No one in that situation would lower the abstraction level down to transistors; it would be like writing complex applications not even in assembly but in machine code.
...........................
Does anyone have formal statistics about the people working at the different abstraction levels? Most IP core providers work at the system or RTL levels and few at lower levels, because today's nodes, synthesis technology, and applications allow designers to work at higher levels without worrying about the Fmax and area of their final implementations. For example, when you use "+" (the plus sign) in your RTL to add two variables, you are not thinking about the adder structure of the final implementation at that moment; you expect the synthesizer to select the best cell.
Which FPGA designer would think of implementing a high-speed adder like the ones in the papers below?
http://cseweb.ucsd.edu/classes/fa06/cse246/lingadder.pdf
http://www.ijsce.org/attachments/File/v2i3/C0820062312.pdf
http://downloads.hindawi.com/isrn/electronics/2012/253742.pdf
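For readers who don't want to open the papers, the generate/propagate recurrence they build on can be modeled behaviorally in a few lines of C (4 bits wide for illustration; this is the logic hiding behind the RTL "+", not a gate-level netlist):

```c
/* Carry-lookahead equations for a 4-bit adder:
 *   g[i]   = a[i] AND b[i]            (generate)
 *   p[i]   = a[i] XOR b[i]            (propagate)
 *   c[i+1] = g[i] OR (p[i] AND c[i])  (carry recurrence)
 *   s[i]   = p[i] XOR c[i]            (sum)
 * Fast adders (CLA, Ling) differ in how they flatten the carry
 * recurrence; this sketch just evaluates it bit by bit. */
unsigned cla_add4(unsigned a, unsigned b, unsigned cin)
{
    unsigned g, p, c = cin & 1, sum = 0;
    for (int i = 0; i < 4; i++) {
        g = (a >> i) & (b >> i) & 1;
        p = ((a >> i) ^ (b >> i)) & 1;
        sum |= (p ^ c) << i;            /* sum bit uses carry-in     */
        c = g | (p & c);                /* carry out to next bit     */
    }
    return sum | (c << 4);              /* 5-bit result incl. carry-out */
}
```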
..........................
I have also met people (excellent at playing with transistors) who are unable to exploit and schedule even a simple algorithm/task at the RTL or higher levels. I still think the most complex part of any design is writing delay-aware (cell and net) RTL with the help of synthesizers and static timing analyzers. Once the RTL is written, you can replace the adders with your intended ones to control the critical path and the area. Working at the transistor level is a really tedious, error-prone task, and by sticking to it you will never get any perspective on today's complex hardware. Sometimes RTL is also an obstacle and very time consuming, and you need to think like an assembly-line engineer (buying parts from third-party providers and mixing them together).
..........................
Applications are fixed (like the mathematics of communications) but technologies change. For example, lots of today's theories in the field of communications were proposed before even the 1940s, and few new ones are proposed today except those related to speed, which follow from node-level changes in digital technology; their forms are still fixed. This suggests it's better to take a systemic view of problems rather than trying to fine-tune everything. There are always people who will do that for you, but believe me, those people are technology dependent (like layout designers) and will fade away as the technology changes.
..........................
Below is a simple scrambler circuit. It took me only 1 hour to code it and about 10 minutes to fully test it with random vectors. I can guarantee a maximum frequency of 180 MHz on the Virtex-5 family without worrying about the place-and-route process. It is written in a form where even the cycle delays, and the resource sharing through muxes that is naturally one of the delay sources, have been fully considered. Who has time to fine-tune it to run at 300 MHz (in the best case), when 90% of the time that is not needed?
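The attached circuit is not reproduced here, but for reference, a software model of a typical additive (synchronous) scrambler fits in a few lines of C. The polynomial x^7 + x^6 + 1 and the nonzero seed are assumptions for illustration (a common choice in V-series modem scramblers), not necessarily what the posted circuit uses:

```c
/* Additive scrambler: XOR the data with an LFSR keystream. Running an
 * identically seeded LFSR at the receiver descrambles, since x ^ k ^ k = x. */
static unsigned lfsr_state = 0x7F;          /* 7-bit state, must be nonzero */

void scrambler_seed(unsigned seed)
{
    lfsr_state = seed & 0x7F;
}

int scramble_bit(int din)
{
    int fb = ((lfsr_state >> 6) ^ (lfsr_state >> 5)) & 1;  /* x^7, x^6 taps */
    lfsr_state = ((lfsr_state << 1) | fb) & 0x7F;
    return (din ^ fb) & 1;                                 /* additive XOR  */
}
```

Because the keystream is independent of the data, applying the same function twice with the same seed recovers the input, which is what makes the random-vector test mentioned above so quick to write.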
« Last Edit: January 13, 2013, 08:06:45 08:06 by Aldec_forever » Logged

solutions
Hero Member
« Reply #19 on: January 13, 2013, 08:40:24 08:40 »

You've been listening to Intel's marketing BS. THEY tried to take clock speed off the table as a metric because they were in a cold war on clock speed with AMD (who seem to be winning the overclocker records these days).

FPGA core clock speed continues to scale with each node. And, as always, there is a speed/power tradeoff - these days power is more important in most ASSPs/ASICs, so a lower-leakage, higher-Vt, SLOWER device is used to reduce static power at the expense of speed. Scaling continues, but it has always been speed*power scaling at each node, with short-channel effects creating more leakage in recent and future nodes...part of the reason for FinFETs.

And, as I said before, if you take your head out of synthesis and custom design your critical paths in SPICE, you can get MUCH higher clock speeds than you ever could by synthesis alone.

And applications are not fixed...they are enabled. I was in the group that defined DSL in the 1990s...had you said in the 1940s that you could put 6 Mb/s and more on phone wires, you'd have been locked up in a straitjacket with the key thrown away. There are also lots of new theories and methods being presented, as there always have been - if there weren't, the IEEE paper requests topic here would see zero traffic.

You are getting old and shutting your brain down to new things, settling in your comfort zone.  Go off and do a 28Gb/s SERDES design at the transistor level.....bet you won't because HDL is so easy and familiar and FPGAs carry zero risk compared to an ASIC whose proximity corrected masks cost $16M. That SERDES, by the way, will need a scrambler...at speed....
Logged
Aldec_forever
Senior Member
« Reply #20 on: January 13, 2013, 10:07:03 10:07 »

Quote from: solutions (Reply #19): "You've been listening to Intel's marketing BS. [...] Go off and do a 28Gb/s SERDES design at the transistor level..... That SERDES, by the way, will need a scrambler...at speed...."

I agree with you that trying to customize everything is good, but the big problem is the time-to-market constraint. In large companies with lots of experienced people in all fields, your approach can be realized well, but if you are at home doing everything yourself, finding new ways to automate parts of your design is vital.

I believe EDA tools and platforms like FPGAs are sweet gifts. They are so complex that learning them, and mapping the hardware and software parts onto them, needs people with excellent backgrounds. I can imagine that you have not dug into HDL and have not done any real design using one. Some designs that use them are so complex that even off-the-shelf verification methodologies and tools cannot debug them. I read somewhere that a small bug in the 4,000,000 lines of RTL code at Intel cost them hundreds of millions of dollars. Who says the big companies do not use HDLs? I suggest you read the following books; they will change your view:
Digital Design of Signal Processing Systems
Guide to FPGA Implementation of Arithmetic Functions
Design for Embedded Image Processing on FPGAs
FPGA-based Implementation of Complex Signal Processing System
Digital Signal Processing with Field Programmable Gate Arrays
The Art of Hardware Architecture
Algorithm-Architecture Matching for Signal and Image Processing
Advanced FPGA Design - Architecture, Implementation, and Optimization
100 Power Tips for FPGA Designers - Stavinov, Evgeni


If you look at the circuit I posted, it only shows the components, operators, and registers. The job of RTL is only this; RTL cannot contain physical characteristics like transistor parameters. The timing, most of the area optimizations, and even the power optimizations are determined by the RTL. After synthesis, if the result is not satisfying, you can focus on every part of it and fine-tune it, but I am sure you will never be able to describe all the properties of a design in transistors if you start from them. If your design consists of, say, 50,000,000 transistors, how can you fine-tune them? Can you please give a clue? I am waiting to hear your suggestions. Your thinking goes back to the 1980s, when HDLs were emerging and most people were complaining about them. I really don't know why these people still insist on not using them. These thoughts will limit your creativity.

By the way, I really don't understand why you talk about SPICE and oversell it. It's only a small program that solves some differential and matrix-based equations. It can only find a few variables, and it is not so capable that it can guide you through the huge aspects of design like place-and-route criteria, etc. I'd be pleased to hear about your kinds of designs and the tools you use.
Thanks!
« Last Edit: January 13, 2013, 03:37:33 15:37 by Aldec_forever » Logged

Aldec_forever
Senior Member
« Reply #21 on: January 17, 2013, 04:34:10 04:34 »

Generic Viterbi decoder Generator:
(After generating the Verilog netlist, use StarVision (http://www.sonsivri.to/forum/index.php?topic=49466.0) for viewing the schematic)
Logged

mrh
Junior Member
« Reply #22 on: June 26, 2013, 08:05:15 20:05 »

Hi,
I'm not sure this is the right place to bring this up, but I'm looking for a commercial "Ethernet 10/100/1000 MAC IP core" (tri-mode Ethernet MAC), or just a "Gigabit Ethernet MAC IP core," to use with a Xilinx Kintex-7. We're making custom hardware including a Kintex-7 and a Marvell 88E1111 as the PHY.
Also looking for "IP" and "TCP/UDP" stack implementations.
So, do you know any well-known vendor who has something good?

Please help. I've been looking for this for quite a long time, but I'm not sure which vendor is best...
For example, I found these:
http://comblock.com/download/com5401soft.pdf
http://comblock.com/com5402soft.html

or http://arasan.com/products/wireline-interface/ethernet/gigabit-ethernet/

http://www.cast-inc.com/ip-cores/interfaces/mac-1g/index.html
http://www.cast-inc.com/ip-cores/interfaces/udpip/index.html
« Last Edit: June 26, 2013, 09:10:51 21:10 by mrh » Logged
solutions
Hero Member
« Reply #23 on: June 26, 2013, 10:26:39 22:26 »

Why just the external PHY? Why not an external  integrated MAC/PHY? It'll be one heck of a lot cheaper than implementing it in an FPGA.

The latest generation FPGAs have hardened IP for a MAC if you're really bent on having it inside your FPGA.

Soft IP for an Ethernet MAC on an FPGA is a very obsolete concept these days.
Logged
Aldec_forever
Senior Member
« Reply #24 on: September 07, 2013, 12:01:26 00:01 »

A fast base-2 antilogarithm function, 10 bits in, 24 bits out. It executes every cycle, with a latency of 2.
SystemC source code. Usable at 150 MHz in Xilinx FPGAs.
No testbench is included.
« Last Edit: September 07, 2013, 01:02:44 01:02 by Aldec_forever » Logged
