SystemVerilog (SV):


Doulos has tons of docs on everything: http://www.doulos.com/

For the SV tutorial on Doulos: http://www.doulos.com/knowhow/sysverilog/tutorial/

This link has a very nicely put together tutorial on SV: https://verificationguide.com/systemverilog/

SV has gone thru lots of updates since its initial version. SV 2009 is the one most widely supported by all tools.
New features in SV2009: http://www.testbench.in/SystemVerilog_2009_enhancements.html

SV is both a design and test language. Its design constructs are mostly from Verilog, so we won't cover those here. We are going to cover the test constructs, which were added from multiple languages. SystemVerilog language components are:

  • Concepts of Verilog HDL
  • Testbench constructs based on Vera
  • OpenVera assertions
  • Synopsys’ VCS DirectC simulation interface to C and C++
  • A coverage application programming interface that provides links to coverage metrics

A Testbench or Verification Environment is used to check the functional correctness of the Design Under Test (DUT) by generating and driving a predefined input sequence to the design, capturing the design output, and comparing it against the expected output.

The verification environment can be written using SystemVerilog concepts. SV added object oriented programming (OOP) to Verilog to aid in verification. Before we get into writing a TB with SV, let's look into some basic concepts.


SV data types:


NOTE: verilog data types are bit vectors and arrays; there is no other data type for storing complex structures. We create static arrays of bits for storing char, string, etc in verilog (struct and union were added to verilog later). SV uses OOP to create complex data types with routines to operate on them. In OOP, there is a handle pointing to every object. These handles may be saved in arrays, queues, mailboxes, etc.

data_type: wire/reg/logic are structural data types, while others are behavioural data types.

Structural Data Type:

wire, reg, logic are structural data types

Integer type:

We can have 2-state or 4-state integer types.
2-state integer types (0 or 1): default value is "0", so any uninitialized variables start with "0" (which might mask startup errors)

bit = 0,1. unsigned. ex: bit[3:0] a;
byte/shortint/int/longint = 8/16/32/64 bits, signed. ex: byte a,b;=> a,b can be from -128 to +127. longint a;

NOTE: 2-state vars save memory in SV and improve simulator performance. To check o/p ports of the DUT, which can have 4 values, use this:
if ($isunknown(iport)) $display("@%0d: 4-state value detected on input port %b", $time, iport); => $isunknown returns 1 if any bit of the expr is x or z

4-state integer types (0,1,X,Z): default value is "x", so that any uninitialized variables start with x (useful in detecting errors).

reg, logic = logic is identical to reg, but can be used where either a wire or a reg would be used in verilog. unsigned. ex: logic[7:0] a;
NOTE: logic[7:0] can't be replaced with byte, as logic[7:0] is unsigned and so ranges from 0 to 255, while byte is signed and ranges from -128 to 127
integer = 32 bits signed, 4-state. It's different from "int", as all 4 values are allowed. "integer" is the type allowed in verilog for defining integers (it is 4-state there too); "int" (2-state) is only allowed in SV.
wire = identical to verilog wire, but the default value is "z". unsigned. We use logic instead of wire, as it can be used for both reg/wire.
time = 64 bit unsigned, 4-state (0,1,X,Z).

NOTE: when we assign a 4-state variable to a 2-state variable, X and Z get converted to 0.

Non integer type:

shortreal, real, realtime = like float, double, double in C. ex: realtime now; (realtime lets us store time as a real number)

string: avoids using reg for storing strings, which was cumbersome. Uses a dynamic array for allocation. Each char is of type byte.
-----
string s; s = "SystemVerilog";
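
The string type also has built-in methods (len, getc, substr, toupper, etc). A small sketch of the common ones (expected values shown in comments):
s = "SystemVerilog";
$display("len=%0d", s.len()); //13 => number of chars
$display("%s", s.toupper()); //"SYSTEMVERILOG". tolower() also available
$display("%s", s.substr(0,5)); //"System" => chars 0 to 5, inclusive
$display("%c", s.getc(6)); //'V' => char at index 6, of type byte
s = {s, "-2009"}; //strings can be concatenated with {}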

wreal:
----
wire can also be assigned real numbers to model voltage/current; just using "real" doesn't allow signals to flow. This type is needed in DMS (digital mixed signal) to model analog blocks. However, since wreal is an sv keyword, you need to name the file as .sv (i.e file is named xy.sv) so that we can use wreal.
wreal q;
assign q = 1.279 * a; //assigns the real value to the wire

wrealsum => if 2 drivers driving, value on wire is sum of 2.
wrealmax => if 2 drivers driving, value on wire is max of 2.

wreal is generally used for pwr supplies:
input VDD;
wrealsum VDD;
out = (VDD > MIN_SUPPLY) ? 1'b1 : 1'bx; //assigns out to "x" if pwr supply is too low.

NOTE: if a gate netlist is generated from this RTL, then VDD may be defined as a wire, which will cause compilation issues since VDD is wreal inside the model. To get around that, change "wire" to "interconnect". In connecting models to digtop also, we use "interconnect", as it allows any signal type to flow on it.

wreal model for Low pass filter:
---------------
module LPF (input wreal Vin, output wreal Vout);
reg int_clk; //internal sampling clk (was undeclared in the original sketch)
real vout_r; //a wreal net can't be assigned procedurally, so compute in a real var and drive the net with assign
assign Vout = vout_r;

initial begin
 #1 int_clk = 0;
 #1 int_clk = 1;
 vout_r = Vin;
end

always @(int_clk) begin
 vout_r = vout_r + Vin/100; //put more conditions to check that Vout doesn't exceed Vin.
 int_clk <= #1 ~int_clk;
end

endmodule

 



Typedef: allows users to create their own names for type definitions that they will use frequently in the code

typedef reg [7:0] octet_t; => define new type. *_t usually used for user defined types
octet_t b; => creating b with that type
above 2 lines same as => reg [7:0] b;

enum: enumerated data types allow us to define a data type whose values have names.
-----
enum { circle, ellipse, freeform } curve; => the named values circle, etc here act like constants. The default base type is int, so anything defined to be of the curve data type can take any of the values circle, ellipse, freeform, all of which are ints: 0=circle, 1=ellipse and so on. The default base type can be changed as follows:
enum byte { circle, ellipse, freeform } curve; => here the base type is byte, so byte 00=circle, 01=ellipse and so on.

Typedef is commonly used together with enum, like this:

typedef enum { circle, ellipse, freeform } ClosedCurve;
ClosedCurve c; => here c can take any value of datatype ClosedCurve (circle, ellipse, freeform). Underneath, c is an integer, so any usage appropriate for int is appropriate here.
c = freeform; => correct
c = 2; => however, this is incorrect, as enums are strongly typed; copying a numeric value into a variable of enumeration type is not allowed, unless you use a type-cast:
c = ClosedCurve'(2); => casts int 2 to ClosedCurve type. So, c = freeform, as circle=0, ellipse=1 and so on. When using "c" anywhere, we are still working with c's integer equivalent.
If we display "c" using %d, we see its int value. So, we can do comparison, arithmetic, etc with "c".

built-in methods for enum:
f = c.first(); => f gets the first value of the enum = circle
f = c.name(); => f gets the string name of c's current value (i.e "circle", "ellipse" or "freeform")
f = c.num(); => f gets the number of values in the enum (3 here); note it does NOT return c's integer value (use c directly, or int'(c), for that)
f = c.next(); => f gets the value after c's current one
f = c.last(); => f gets the last value of the enum = freeform

ex: typedef enum shortint { circle, ellipse, freeform } ClosedCurve; => here the ClosedCurve type takes shortint values instead of 32-bit int.
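
A small sketch tying these methods together to walk all values of the ClosedCurve enum above (this is the standard iteration idiom, since next() wraps around at the end):
ClosedCurve c;
initial begin
  c = c.first();
  for (int i = 0; i < c.num(); i++) begin //c.num()=3 here
    $display("%s = %0d", c.name(), c); //prints circle = 0, ellipse = 1, freeform = 2
    c = c.next();
  end
end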

 



package: similar to vhdl packages. A container for shared declarations. No processes allowed, so no assign/always/initial blocks can be inside a package.

package types_pkg; => declaring a pkg of name types_pkg
 typedef enum {ACTIVE,INACTIVE} mode_t; => defines a type mode_t
 class base; ... endclass
endpackage

module a;
 types_pkg::mode_t mode; => uses type mode_t from pkg types_pkg to declare variable mode. "::" is the class scope resolution operator. If types_pkg is in file file1.sv, then that file has to be given as one of the files to be compiled, else this pkg can't be found.
 import types_pkg::*; => can also do this. It imports all classes/definitions etc from package types_pkg, so that they can be used in module "a" below
 ...
endmodule



struct and union: similar to C

struct {
  int x, y;
} p;

p.x = 1; => struct members are selected using the .name syntax
p = '{1,2}; => structure literals and expressions are formed using the '{...} assignment pattern
p = '{x:1,y:2}; => named assignments

typedef struct {bit [7:0] r, g, b;} pixel_s; => defines typedef so that this struct can be shared across routines.
pixel_s my_pixel;

union: they save mem, and are useful when you frequently need to read and write a register in several different formats.
ex:
typedef union { int i; real f; } num_u;
num_u un;
un.f = 0.0; // set un in floating point format
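
A packed union makes the "several formats" idea concrete: all members must be the same width, and they alias the same bits. A small sketch (reg_u and the field names are made up for illustration):
typedef union packed {
  logic [15:0] word; //view 1: whole 16-bit word
  struct packed { logic [7:0] hi, lo; } bytes; //view 2: two 8-bit halves
} reg_u;

reg_u r;
initial begin
  r.word = 16'hBEEF; //write in one format
  $display("hi=%h lo=%h", r.bytes.hi, r.bytes.lo); //read in the other: hi=be lo=ef
end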

##########################

 



class:

user-defined data type. Classes consist of data (called properties) and tasks and functions to access the data (called methods). Class instances are dynamic objects: declaring a class instance does not allocate memory space for the object; calling the built-in new() function creates memory for the object. When we call methods, we pass the handle of the object, not the object itself. This is similar to using ref in SV, where the address of the arg is passed (if ref is not used, a local copy is made of the var, and that is passed).
structs, in contrast, are static objects, so they take up mem from the start of the pgm itself.

class declaration: class can be defined in program, module, package or outside of all of these. Good approach is to declare them outside of program or module and in a package. Class can be used (instantiated) in program and module only.
------------------
class C;
  int x; => by default all class members are publicly visible in SV (in contrast to other OOP languages, where they are private by default). To hide a member, declare it local: "local int x;". However, a local member can't be accessed in extended classes either, so we can instead declare it "protected int x;". We can also apply rand to any variable to randomize it: protected rand integer x;
  stats_class stats; => here class C contains an inst of another class "stats_class". stats is a handle to a stats_class object. If stats_class is defined later in the file, then to prevent a compilation error, use a forward typedef before this class: "typedef class stats_class;"
  static int count = 0; // count is a static var, so it is shared across all instances of the class. It can be used to count the number of objects created, by incrementing it in the new() fn every time it's called. Static vars are stored with the class, NOT the object; that's why they retain their value.
  function new (int a, ...); x=a; endfunction => initializes variables when a new object is created. If no new fn is defined, the default new constructor is used.
  function new (logic [31:0] a=5, b=3, ...); x=b; endfunction => this assigns default values to args. So, if new(5,7) is called then a=5, b=7, but if new() is called then a=5, b=3.
 NOTE: this new() constructor in SV is diff than the new[] operator used for dynamic arrays
  task set (int i);
    x = i;
  endtask
  function int get; => if we want to define this fn outside of the class body, use "function int C::get;" there, and put "extern function int get;" inside the class to tell the compiler.
    return x;
  endfunction
endclass => can also be written as endclass : C => labels are useful in reading code, as we know which end corresponds to which start.

create object:
-------------
 C c1; => declares c1 of type C, i.e c1 can contain a handle to an object of class C. It is initialized to null.
 c1 = new; => new allocates space for an object of type C, initializes it (to the SV default for each data type: reg, bit, int, etc) and assigns its handle (returns its address) to c1
or above 2 can be combined in one as:
 C c1 = new;

delete object:
------------
 c1 = null; => this assigns the null pointer, so the object can be deallocated by SV.
 Garbage collection runs automatically: GC periodically checks how many handles point to an object. If an object is no longer referenced, it is unused, and hence freed.

using class:
-----------
initial
begin
  c1.set(3); => or we can say c1.x=3 (if we had declared x local, then x couldn't be accessed this way).
  $display("c1.x is %d", c1.get()); => in strict OOP, we should use access methods like get() and set() for accessing vars of an obj. However in SV, we may relax the rule and access them directly, as it's more convenient.
end

extending class (inheritance):
-----------------------------
class ShiftRegister extends Register;
  task shiftleft;  data = data << 1; endtask
  task shiftright; data = data >> 1; endtask
endclass
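
Pulling the pieces above together, here's a minimal self-contained sketch (class/member names are made up) showing a constructor, a static counter, protected data and inheritance with super.new():
class Register;
  protected int data; //visible to extended classes, hidden from outside
  static int count = 0; //shared across all objects
  function new(int init = 0);
    data = init;
    count++; //counts objects created
  endfunction
  function int get(); return data; endfunction
endclass

class ShiftRegister extends Register;
  function new(int init = 0);
    super.new(init); //call base class constructor
  endfunction
  task shiftleft(); data = data << 1; endtask
endclass

initial begin
  ShiftRegister sr = new(3);
  sr.shiftleft();
  $display("data=%0d, objects=%0d", sr.get(), Register::count); //data=6, objects=1
end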

scoping rules in SV:
------------------
scope = a block of code such as a module, program, task, function, class or begin-end block. The for and foreach loops automatically create
a block, so that an index variable can be declared or created local to the scope of the loop.
You can define new variables in a block. New in SV is the ability to declare a variable in an unnamed begin-end block. A name can be relative to the current scope, or absolute starting with $root. For a relative name, SV looks up the list of scopes until it finds a match. So, if we forget to declare a var in the local scope, then SV will silently use one from a higher scope if it finds one with the same name. If you want to be unambiguous, use $root at the start of a name. "this" allows access to vars of the current class: this.name refers to the "name" var within that class. If it's not present, SV will error out.
ex: int limit; // this can be accessed using $root.limit
 program p; int limit; ... endprogram // this can be accessed using $root.p.limit


constraints in class: all constraints in a class need to be satisfied when randomize() is called, or randomize() will fail.
-------------------
class c;
 ... rand struct packed {...} message; ...
 static constraint c1 {clk >= min_clk;} => NOTE: no semicolon at the end of the constraint block
 static constraint c2 {message.DLC <= 8;} => or {message.DLC inside {[0:8]};} which forces DLC's rand value to be between 0 and 8
endclass
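
A small runnable sketch of class constraints in action (class/field names made up); randomize() returns 0 if the constraints can't be met:
class Pkt;
  rand bit [7:0] addr;
  rand bit [3:0] len;
  constraint c_addr {addr inside {[16:200]};}
  constraint c_len  {len >= 1; len <= 8;}
endclass

initial begin
  Pkt p = new();
  repeat (3)
    if (p.randomize()) $display("addr=%0d len=%0d", p.addr, p.len); //all constraints honored
    else $display("randomize failed");
end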

coverage: functional and code.
---------
1. covergroup: is like a user-defined type that encapsulates and specifies the coverage. It can be defined in a package, module, program, interface or class. A covergroup has one or more coverpoints (each of which may have multiple bins).
ex:
class test_cov extends uvm_seq;
 apb_seq_item t;
 covergroup i2c_cg; [or additional conditions can be specified for sampling => covergroup i2c_cg @(posedge clk iff (resetn == 1'b0)); //samples automatically on +ve clk while resetn=0]. Sampling can also be done by explicitly calling the .sample() method, when sampling is required based on some calculation rather than an event.
  i2cReg: coverpoint (t.data) iff (t.addr == `I2C_CTRL) { //if bins not specified, default bins are created based on all possible values of coverpoint. In this case, for all values of t.data
         wildcard bins i2cEn         ={16'b???????????????1}; //bin count is inc every time t.data LSB=1.
         wildcard bins i2cMode       ={16'b??????????????1?, 16'hFF0F}; //inc if any of these 2 values matches
                  bins invalid       =default;
      }
  CtrlReg: coverpoint digtop.porz; ...
  option.comment = "CG for I2c";
 endgroup : i2c_cg
 ...
 //function to sample
 function my_sample(apb_seq_item t);
  this.t=t;
  i2c_cg.sample(); //sample is the built-in method to sample a covergroup. This triggers sampling of i2c_cg, when automatic sampling is not enabled above.
 endfunction
 //function to create new inst of i2c_cg
 function new (string name="i2c_cov", uvm_component parent=null); //string is a valid type in SV
  super.new(name,parent);
  i2c_cg = new();
 endfunction
endclass

cg cg_inst = new; => creates inst. could have also created it by calling function new.
initial begin
 cg_inst.sample(); //this causes it to sample all coverpoints in cg. Or sample can be called within a read task or something, when we know that sampling only needs to happen when we are in the read task.
 //my_sample(t); //alt way of sampling by calling func my_sample
end

 



mailbox:

A mailbox is a communication mechanism that allows messages to be exchanged between processes. Data can be sent to a mailbox by one process and retrieved by another.
Mailbox is a built-in class that provides the following methods:
--Create a mailbox: new()
--Place a message in a mailbox: put()
--Try to place a message in a mailbox without blocking: try_put()
--Retrieve a message from a mailbox: get() or peek()
--Try to retrieve a message from a mailbox without blocking: try_get() or try_peek()
--Retrieve the number of messages in the mailbox: num()
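
A minimal producer/consumer sketch using a parameterized (typed) mailbox; the bound of 4 makes put() block once 4 messages are pending:
mailbox #(int) mb = new(4); //typed, bounded mailbox

initial begin //producer
  for (int i = 0; i < 8; i++) mb.put(i);
end

initial begin //consumer
  int v;
  repeat (8) begin
    mb.get(v); //blocks until a message is available
    $display("@%0t got %0d", $time, v);
  end
end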

 



program:

similar to module, but we do not want a module to hold the tb, as it can have timing problems with sampling/driving, so the new "program" block was created in SV. Just as a module, it can contain ports, i/f, initial and final stmts. However, it cannot contain "always" stmts. It was introduced so that the full tb env can be put into a program instead of a module, which is mostly used for describing h/w. Thus it separates tb and design, and provides an entry point to execution of the tb.
NOTE: a program can call tasks/funcs inside a module, but NOT vice-versa.

ex:
program my;
 mailbox my_mail; int packet;
  initial begin
   my_mail = new(); //creates inst of mailbox
   fork
    my_mail.put(0); //puts msg "0" in mailbox
    my_mail.get(packet); //gets whatever is in mailbox, and assigns it to int packet. here packet gets value of 0
   join_any
  end
endprogram

ex:
program simple (input clk, output logic Q);
 env env_inst; //env is a class defined somewhere
 initial begin //tb starts exec
   ... @(posedge clk); ...
   env_inst = new(); //call class/methods
   tb.do_it(); //calling task in module
 end
 task ack(..); ..
endprogram

module tb();
  ... always @(clk) ...
  simple I_simple(clk, Q); //inst the program above, connecting its clk and Q ports. Not strictly necessary: even w/o instantiating the program, it will still work.
  task ack(..); //same name task as above, but still considered separate
endmodule

 



randomize: in testbench
---------
SV provides a scope randomize() fn (also written std::randomize()), which can randomize the args passed to it. Optional constraints can be provided. randomize() returns "1" if successful.
ex: int var1,var2; if (randomize(var1,var2) with {var1<100; var2>200;}) $display("var1=%0d var2=%0d",var1,var2);

For classes, the built-in randomize() method is called to randomize vars which have the rand or randc attribute. "rand" distributes uniformly, while "randc" is cyclic random, which randomly iterates over all the values in the range with no value repeated within an iteration, until every possible value has been assigned. pre_randomize() and post_randomize() fns are also available: when randomize() is called, first pre_randomize() is called, then randomize(), and finally post_randomize(). All constraints within the class will also need to be satisfied whenever randomize() is called, else it will fail.
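
A small sketch of rand vs randc plus the pre/post hooks (names made up); the hooks are called automatically around every randomize():
class Dice;
  rand  bit [2:0] roll; //rand: values may repeat back-to-back
  randc bit [1:0] chan; //randc: cycles thru all of 0..3 before any repeat
  function void pre_randomize();
    $display("about to randomize");
  endfunction
  function void post_randomize();
    $display("got roll=%0d chan=%0d", roll, chan);
  endfunction
endclass

initial begin
  Dice d = new();
  repeat (4) void'(d.randomize()); //void' discards the pass/fail return value
end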

c test_msg = new; => handle must point to an actual object before randomize() is called
test_msg.randomize(); => randomizes all variables in the class which have the rand/randc attribute.
test_msg.randomize() with {message.DCL==4;}; => randomizes all, except that DCL is fixed to 4

assert: in testbench. used to verify design, as well as provide func coverage
------
In SV, 2 kinds of assertions (aka SVA=system verilog assert):
1. immediate (assert): procedural stmt mainly used in sims
2. concurrent ("assert property", "cover property", "assume property" and "expect"): these use sequence and properties that describes design's behaviour over time, as defined by 1 or more clocks

1. procedural stmt similar to if. It tests a boolean expr, and anything other than 1 is a failure, which then reports severity level, file name, line num and simulation time of the failure.
ex: A1: assert (!(wr_en && rd_en)); => will display an error if wr_en and rd_en are "1" at the same time.
ex: assert (A==B) $display("OK"); else $warning("Not OK"); //we can write our own pass/fail msg. $fatal, $error(default), $warning and $info are the various severity levels.
ex: similar code written in verilog would take a couple of lines:
begin: A1
 if (wr_en && rd_en) $display ("error");
end

2. concurrent stmt: here properties are built using sequences which are then asserted.
ex: assert property (!(rd && wr)); => this checks the assertion at every tick of sim time.
ex: A1: assert property (@(posedge clk) !(rd_en && wr_en));  => A1 is label, property is a design behaviour that we want to hold true. event is the posedge clk, and on that event we look for expr !(rd_en && wr_en) to be true. Here assertion is checked for only at +ve clk. It samples rd_en and wr_en at +ve clk, and then checks at that clk edge. Usually it's temporal, i.e with some delay (#5), so that any delays in gate sims are accounted for.
ex: assert property (@(posedge Clock) Req |-> ##[1:2] Ack); => here Req and Ack are sampled at +ve clk, then whenever Req goes high, Ack should go high in next clk or following clk

system tasks $assert, $asseroff, $asserton etc also available (however these may not work on all simulators, so better to stick to SV assert cmd, and use tasks to turn asserts off, etc)

implication construct: allows conditional monitoring of sequences. If the LHS seq matches, then the RHS seq is evaluated. So, this adds conditional matching of seq.
1. overlapped: |-> if there is match of LHS, then RHS is evaluated on same clk tick.
ex: req |-> ack //expr is true if req=1 results in ack=1 in same cycle
2. non overlapped: |=> if there is match of LHS, then RHS is evaluated on next clk tick
ex: req |=> ack //expr is true if req=1 results in ack=1 in next cycle (i.e ack is delayed by a cycle)

sequence delay:
ex: req ##[1:3] ack //expr is true if req=1 results in ack=1 in 1 to 3 cycles later
ex: req ##2 ack |-> ~err //expr is true if req=1 results in ack=1 2 cycles later (i.e ack is delayed by 2 cycles exactly). Then in that cycle that ack goes high, err=0 for implication to be true. ##2 equiv to ##[2:2]

functions:
1. $past(A) => past func. returns val of A on prev clk tick.
ex: req ##[2:4] ack |=> ~$past(err) //after ack goes high, err=0 the same cycle (|=> implies next cycle, but $past implies prev cycle, so effectively it's current cycle) for implication to be true.
2. $rose(A), $fell(A), $stable(A) => assess whether a signal is rising, falling or is stable b/w 2 clk ticks.
ex: req ##[2:4] ack |=> $stable(data) //data should be stable the next cycle after ack goes high

------
sequence request //a seq is a list of bool exprs in order of increasing time.
    Req; => Req must be true on current tick/clk (since no ## specified)
endsequence

sequence acknowledge
    ##[1:2] Ack; //##1 => Ack must be true on the next tick/clk. ##[1:2] => Ack must be true on the 1st or 2nd tick/clk.
endsequence

property handshake;
    @(posedge Clock) request |-> acknowledge; //ack must be true within 1-2 tick after req is true. implication const here adds if stmt, saying only if "req" is true, go ahead and eval "ack"
 // @(posedge Clock) request acknowledge; //here implication const not used. It means that on every +ve clk, both seq must be true, i.e: Req must be true and within 1-2 cycles after first +ve edge, ack must be true.
//  @(posedge Clock) disable iff (Reset) not b ##1 c; //here check is disabled if Reset=1. If Reset=0, then seq "b ##1 c" should never be 1.
endproperty

assert property (handshake); //property asserted here

---

bind: assertions can be added in RTL, but when we want to keep assertions separate from the RTL file, we need a bind stmt to bind assertions to specific instances in RTL. bind works just like other module instantiations, where we connect one module's ports to another module's ports:

bind digtop assertion_ip U_binding_ip (.clk_ip (clk), .... ); //here module "digtop" is bound to "assertion_ip", which has all the assertions, and then the vip ports are connected to RTL ports. "digtop" is the RTL DUT module, while "assertion_ip" is the assertion module (with assertions like properties in it). Both have ports which are connected here. This bind stmt can be put in the top level TB file.


-----------



###################################
tasks/functions:
--------------
In verilog, args to tasks/functions can only be passed by value. This involves making a local copy of the args and then working on the local copy, without modifying the original. However, sometimes we need to modify args globally within the task/func. SV provides "pass args by reference" to solve this problem.
1. pass by value: function int crc( logic signal1);
2. Pass by reference:  function int crc(ref logic signal1); => ref implies pass args by reference.
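
A small sketch contrasting the two (note that routines with ref args must be declared automatic):
function automatic void bump_copy(int x); //pass by value: works on a local copy
  x++;
endfunction

function automatic void bump_ref(ref int x); //pass by reference: modifies caller's var
  x++;
endfunction

initial begin
  int a = 5;
  bump_copy(a); $display("%0d", a); //still 5
  bump_ref(a);  $display("%0d", a); //now 6
end

For large args (arrays, structs) that shouldn't be modified, "const ref" gives the efficiency of ref while blocking writes.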

###################################



Arrays: In verilog, arrays are static (fixed size of array), however, SV allows dynamic arrays similar to malloc in C.

Static array:


Both packed and unpacked  arrays are available in both verilog/SV.
Ex: reg [7:0] a_reg_array; => packed array, since the array upper and lower bounds are declared between the variable type and the variable name. A packed array in SV is treated both as an array and as a single value. It is stored as a contiguous set of bits with no unused space, unlike an unpacked array.
ex: bit [3:0] [7:0] bytes_4; // 2D packed array. 4 bytes packed into 32-bits
bytes_4 = 32'hdead_beef; //here bytes_4[3]=8'hde, bytes_4[0][1]=1'b1. So bytes_4[3] down to bytes_4[0] form a single 32-bit longword.
Ex: reg a_reg_array [7:0]; => unpacked array, since the array bounds are declared after the variable name

an array may have both packed and unpacked parts.
Ex: reg [7:0] reg_array [3:0][7:0];
Ex: bit [3:0] [7:0] barray [3]; // unpacked array of 3 packed 32-bit elements: barray[0],[1],[2] are each a 32-bit longword
barray[0] = 32'h0123_4567; barray[0][3] = 8'h01; barray[0][1][6] = 1'b1; //any word/byte/bit can be accessed

NOTE: reg a_reg_array [8]; => this is a compact form, equivalent to array [0:7]


Dynamic array:

Dynamic arrays in SV have the limitation that the dynamic part of the array must be unpacked and 1-dimensional.
Ex: reg [7:0][3:0]      a_reg_array []; // dynamic array with 2 packed dims
Ex: a_reg_array = new[4]; //here we allotted a size of 4 to a_reg_array, i.e a_reg_array[3] down to a_reg_array[0]
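
A short sketch of the usual dynamic array ops: new[n] to allocate, new[n](old) to resize while preserving contents, size() and delete():
int da[];
initial begin
  da = new[4]; //allocate 4 elements (initialized to 0)
  foreach (da[i]) da[i] = i;
  da = new[8](da); //grow to 8 elements, copying the old 4 over
  $display("size=%0d", da.size()); //8
  da.delete(); //deallocate; size back to 0
end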

NOTE: in verilog, we can't have 2D arrays as ports, but in SV we can. So, during simulation, we provide the +sv option to irun, so that verilog files are treated as SV files, and 2D ports don't throw an error.
Ex:
 wire [8:0]     tb_y_2d_table[15:0]; => 2D array defined
 digtop dut (.y_2d_table(tb_y_2d_table), ... ); => in verilog, this would be illegal, but it works in SV.

 

Queue:

in queues, size is flexible. It's a single-dim array, with all queue ops permitted (sort, search, insert, etc). Queues are similar to dynamic arrays, but growing/shrinking a queue doesn't have the performance penalty of a dyn array, so they are better. No new() needed for a queue.
ex: int q[$] = {0,1,3}; //defines a queue with q[0]=0, q[1]=1, q[2]=3. The [$] is what makes it a queue.
q.insert(1,5); //inserts 5 at position [1], so new q={0,5,1,3}. Accessing out of bounds (i.e q[9]) will cause a warning/error.
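
A short sketch of the common queue ops:
int q[$] = {0,1,3};
initial begin
  q.push_back(7); //q={0,1,3,7}
  q.push_front(9); //q={9,0,1,3,7}
  $display("%0d", q.pop_front()); //9, q back to {0,1,3,7}
  $display("last=%0d", q[$]); //q[$] as an index means the last element => 7
  q.sort();
  $display("size=%0d", q.size()); //4
  q = {}; //empty the queue
end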

associative array:

generally used for sparse mem. Dynamically allocated and non-contiguous.
ex: int aa[*]; //defines an assoc array with a wildcard index type
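
A small sketch with a typed (string) index, which is the more common form (note: foreach can't be used on a wildcard [*] index):
int aa[string];
initial begin
  aa["alpha"] = 1;
  aa["beta"]  = 2;
  if (aa.exists("alpha")) $display("alpha=%0d", aa["alpha"]);
  foreach (aa[k]) $display("aa[%s]=%0d", k, aa[k]); //iterates in index order
  $display("entries=%0d", aa.num()); //2
end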

Foreach:

For looping thru array elements, we use a for loop in verilog. However, we need to know the array bounds, or we may go over the bounds and cause issues down the line (most tools do not report this as an error). foreach in SV solves this issue.
ex: in verilog:
int arr[2][3]; => valid range from arr[0][0] to arr[1][2]
for (int i=0; i<=2; i++)
 for (int j=0; j<=3; j++)
   $display("%x",arr[i][j]); => NOTE: arrays arr[2][3] accesssed which is out of bound.

ex: in sv:
int arr[2][3];
foreach (arr[i,j]) => will automatically go thru the valid range (note the [i,j] syntax). Can also be used for queues => foreach (q[i])
 $display("%x",arr[i][j]); => NOTE: this will display all arr values from arr[0][0] to arr[1][2]

foreach (md[i,j]) // NOTE: we don't have m[i][j] as in verilog code above.
 $display("md[%0d][%0d] = %0d", i, j, md[i][j]);

 



Interface:

An interface is a new SV construct. An interface is a named bundle of wires, similar to a struct, except that an interface is allowed as a module port, while a struct is not. So, when there is a bunch of wires to be connected at multiple places, we can define them in an interface, and use that interface to make the connections, saving typing. Adding/deleting signals is easy, as only the interface definition needs to be modified; connections b/w modules remain the same.

The group of signals defined within the i/f are declared as type logic, as they are just wires/nets. All of these nets are put inside an "interface" (similar to a module). Now this interface can be inst just like a module, and can also be connected to a port like a signal. To connect it as a port, we just treat the instantiated interface as a net, and connect it to the module port, just as we connect other nets to module ports. An interface has a lot more capabilities than just being a substitute for a group of signals: it can have parameters, constants, variables, functions, and tasks inside it too. It can have assign statements, initial, always, etc, similar to what modules can have.

Interface was intended to connect design and testbench, as that's where we had to either manually type all port names or use SV .* (provided port names were the same on the 2 sides) to connect all ports from TB to DUT. However, interface is now used within the DUT too, and all synthesis tools understand how to synthesize an interface correctly.

Link => https://verificationguide.com/systemverilog/systemverilog-interface-construct/

Duolos 1 pager => https://www.doulos.com/knowhow/systemverilog/systemverilog-tutorials/systemverilog-interfaces-tutorial/

Interface definition:

interface intf; //signal addition/deletion is easy as it's done only at this place. Everywhere else, intf is just instantiated, and called as 1 entity.
  logic [3:0] a;
  logic [3:0] b;
  logic [6:0] c;
endinterface

Interface instantiation:

intf i_intf(); //this i_intf handle can now be passed to various modules.

Interface connections:

adder DUT ( //Here adder module has 3 separate ports. It may also have intf port as "intf intf_add" instead of having 3 separate ports. In that case, we can do connections as "adder DUT (.intf_add(i_intf))"
  .a(i_intf.a),
  .b(i_intf.b),
  .c(i_intf.c)
);

We can access/assign values of a, b etc as follows:
i_intf.a = 6;
i_intf.b = 4;
$display("sum is %d", i_intf.c);

Virtual Interface:

The SV interface above is static in nature, whereas classes are dynamic in nature. For this reason, it is not allowed to declare an interface within a class, but it is allowed to refer to or point to an interface. A virtual interface is a variable of an interface type that is used in classes to provide access to the interface signals.

Good example here: https://verificationguide.com/systemverilog/systemverilog-virtual-interface/

syntax: virtual interface_name inst_name => we just prepend the keyword "virtual" before the declaration.

virtual intf i_intf; => note: no parentheses here; a virtual interface is just a handle to an interface instance, not an instance itself
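
A minimal sketch of the usual pattern: the class holds a virtual handle, and the TB passes in the real (static) instance (driver/drive are made-up names; intf is the interface defined above):
class driver;
  virtual intf vif; //handle to an interface instance
  function new(virtual intf vif);
    this.vif = vif; //bind the handle to the real instance
  endfunction
  task drive(input [3:0] val);
    vif.a = val; //class code drives actual interface signals thru the handle
  endtask
endclass

module tb;
  intf i_intf(); //the real, static interface instance
  driver d = new(i_intf);
  initial d.drive(4'h6);
endmodule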


Modport:

A new construct related to interface which provides direction information (input, output, inout or ref) to wires/signals declared within the interface. The keyword modport indicates that the directions are declared as if inside the module to which the modport is connected (i.e if modport i/f is connected to RAM, then modport dirn is one seen from the RAM). It also controls the use of tasks and functions within certain modules. Any signal declared as i/p port in modport can't be driven and will cause compilation error. So, using modport, we can restrict driving access by specifying direction as input. We can access wires declared in the modport in same way as in interface, except that there's one more hier in the name.

Ex here: https://verificationguide.com/systemverilog/systemverilog-modport/

My understanding is that if there were no modport, we wouldn't be able to assign "interface" signals to the module defn, as the module defn has i/p and o/p ports, while the interface only has logic or nets. By having a modport with appr i/p, o/p defn, we can now assign these to the module defn, as shown below for TestRAM.

ex:
interface MSBus (input Clk); //an interface can have i/p, o/p ports of its own, like Clk here
  logic [7:0] Addr, Data; //nets used internally
  logic RWn;
  modport Slave (input Addr, inout Data); //internal nets declared as i/p and i/o for the "Slave" modport. We can't drive/assign the net "Addr" thru this modport anymore; any attempt to do so will lead to a compilation error - "Error-[MPCBD] Modport port cannot be driven"
  modport Master (output Addr); //internal net "Addr" declared as o/p for the "Master" modport.
endinterface

interface trim_if();
  logic clk, read, ...; //all dig/ana ports are connected to these logic signals "clk, read". This would not be true if dig, ana were modules (since then connectivity would have to be provided using .clk(clk) connections), but since these are modports, connectivity is implied when the same name is used for the port/logic.
  modport dig (input clk, output read, ...); //all ports here used below in TestRam
  modport ana (...);
endinterface

module RAM (MSBus.Slave MemBus); // Here, MemBus is defined of type MSBus.Slave which is a modport, and has Addr=i/p and Data=inout which are assigned to RAM ports. MemBus is the port (Addr and Data are within MemBus modport i/f)

endmodule


module TestRAM (input a, trim_if.dig ram_if, output b,...); //module port defined as interface that has all other I/O ports. So, i/f port behaves as a bus with appropriate direction of bits, and can be accessed in simvision by expanding the i/f.
  logic Clk;
  trim_if my_if(); //defined my_if as of type trim_if. So my_if contains everything defined within trim_if. If we use my_if.dig to connect any other interface which is also of type trim_if.dig, then those i/o pins defined within dig get connected. So, saves typing as multiple signals get connected as single bus i/f.
  mod1 i_mod1(my_if); mod2 i_mod2(my_if); //here mod1 and mod2 modules are connected via the bunch of wires (clk, read, etc) in the my_if interface.
  assign ram_if.clk = MCLK; //i/f signals can be assigned.
  MSBus i_bus(.Clk(Clk)); //instance the i/f. Now any signal from MSBus can be accessed.
  RAM TheRAM (.MemBus(i_bus.Slave)); //connect the i/f (Here MemBus needs to be defined of type i/f inside RAM module, or be defined as 8 bit addr and 8 bit data).
  assign i_bus.Slave.data = signal3; //signals can be assigned to i/f.
  ...
endmodule

Tasks/Functions in Interfaces:

Tasks and functions can be defined in interfaces to allow a more abstract level of modelling. We can call them from inside the TB module, and drive values on the i/f to test the DUT.
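
A small sketch (bus_if and write are made-up names) of a task inside an interface that hides the pin-level wiggling from the TB:
interface bus_if (input logic clk);
  logic [7:0] addr, data;
  logic wr;
  task write(input [7:0] a, d); //one abstract call instead of driving 3 signals by hand
    @(posedge clk);
    addr <= a; data <= d; wr <= 1;
    @(posedge clk);
    wr <= 0;
  endtask
endinterface

//from the TB (i_bus being an instance of bus_if): i_bus.write(8'h10, 8'hAB);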

 



Clocking Block:

A clocking block specifies timing and synchronization for a group of signals. It's usually used in testbenches to synchronize clocking events b/w DUT and TB. A clocking block can be declared in an interface, module or program block. We have i/p, o/p signals in a clocking block, along with optional skews that specify the sampling/driving delay of these signals. These delays are called skews and must be constant expressions; they can be specified as parameters. If a skew does not specify a time unit, the current time unit is used.

ex: in the example below, cb is defined as a clocking block. It's synchronized on the +ve edge of "clk", which is called the clocking event. "from_Dut" is an i/p signal that is sampled #1 time units before the +ve edge of clk (it models a setup time of 1 time unit). "to_Dut" is an o/p signal that is driven #2 time units after the +ve edge of clk (it models a c2q delay of 2 time units).

clocking cb @(posedge clk);
  default input #1 output #2; //Instead of defining default skew here, we may also specify skew with each i/p, o/p signal.
  input  from_Dut; //to specify skew here, we may write "input#1ps from_Dut;"
  output to_Dut;
endclocking

@(cb); //this is the clocking block event that waits on "cb" block. This is equiv to @(posedge clk); We don't have to specify clk explicitly

Cycle Delay ##: #delay specifies skew or delay in absolute time units. ## is a new one, used to specify delay in terms of clk cycles of the default clocking, i.e ##8 means "wait 8 clk cycles".
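
A sketch of how the pieces fit: declaring the block as the "default clocking" is what enables the bare ## cycle delay, and signals are sampled/driven thru the cb. prefix:
default clocking cb @(posedge clk);
  default input #1 output #2;
  input  from_Dut;
  output to_Dut;
endclocking

initial begin
  ##1; //wait 1 clk cycle (legal because a default clocking is defined)
  cb.to_Dut <= 1'b1; //driven 2 time units after the clk edge
  ##8; //wait 8 clk cycles
  $display("%b", cb.from_Dut); //value sampled 1 time unit before the clk edge
end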

 



create sine wave in system verilog:

module tb();
import "DPI" pure function real sin (input real rTheta);
real time_ns, sine_out; //declarations were missing in the original sketch
   initial begin
      for (int i = 0; i < 120000; i++) begin
     time_ns = $time * 1e-9; //in this case the timescale is in ns, but $time just gives the raw tick count, so we scale it into seconds.
     sine_out = (sin(2*3.14*1000*time_ns)); //sine_out is a real number. freq=1000=1KHz. So, in 10^-3sec=1ms, this should go from 0 to 2pi. So, $time will need to go to 10^6 units, here 10^6ns or 1ms, which is what is expected. So, everything is consistent. If the timescale changes, we have to change the multiplying factor, or else the freq will be off by orders of magnitude. Try using $realtime.
     #5; //delay of 5 time units (here 5ns)
      end
   end // initial begin
endmodule

NOTE: if we don't multiply $time by anything, then 2*pi*1000*$time would be a very large number and a multiple of 2*pi. That would imply that sine_out is always 0. However, in reality, we are using 3.14 instead of pi, so the arg of the sine func is not exactly a multiple of 2*pi, but has a little residue. That residue keeps on increasing in each loop, and generates its own sine wave with a very large freq. That's a bogus sine wave, and has nothing to do with the frequency we are targeting.

------------------------------
to get access to shell variables from within verilog code, do this (only valid in sv)

import "DPI-C" function string getenv(input string env_name);
string sdf_path;
initial begin
 sdf_path = {getenv("HOME"), "/software/file.sdf"};   
 $write("sdf_path = %s \n",sdf_path); => prints sdf_path as /home/kagrawal/software/file.sdf
end

------------------------------
randcase: case statement that randomly selects one of its branches. The prob of taking a branch is based on the branch weight.
---
ex:
randcase
 3 : x = 1; //prob=3/(3+1+4)=3/8 of assigning 1 to x
 1 : x = 2; //prob=1/8 of assigning 2 to x
 4 : x = 3;
endcase

Each evaluation of a randcase statement generates a random number in the range 0 to SUM (the sum of the branch weights), using $urandom_range(0,SUM). As the random numbers generated using $urandom are thread stable, randcase also exhibits random stability.

-----------------------------
$system task was added in SV-2009 to call system cmds directly.

$system: to call unix cmds (called within any module):
-----
$system("rm -rf dir1/*.v");
To call some system cmd after finish of test, do:
final $system("rm -rf dir1/*.v");


 

Formality: synopsys tool for proving RTL to gate equivalence (or gate to gate equiv, or rtl to rtl equiv) using formal verification (unlike verification thru simulation, which requires i/p patterns)
2 basic tools:
1. Equivalence checker: prove or disprove that one design is logically equiv to another. Formality an example of this.
2. Model checker: prove or disprove that a design adheres to specified set of logical properties.

Equiv checking is 4 step process:
---------------
1. Design read and elaboration => read the set of design and library files, and elaborate them into a format ready for equivalence checking that fully represents the logic of the user-defined top-level model. During this phase, you establish the reference and implementation designs, along with corresponding compare points and logic cones.
Logic cones: consist of combo logic.
Compare points are primary outputs, internal registers, black box input pins, or nets driven by multiple drivers where at least one driver is a port or black box. The design objects at which logic cones terminate are either primary inputs or other compare points.

2. setup to preempt differences: to avoid false failures. Ex: the impl design has scan added while the RTL doesn't, so set a constant to disable scan during checking. We also provide guidance to assist matching, add black boxes and other constraints (ex: limit the no. of i/p values that will be considered during verification)

3. matching: match each primary o/p, seq element, blackbox i/p pin and qualified net in impl with a comparable design obj in ref. This one-to-one correspondence is not required for an impl that contains extra POs, or when there are extra registers in the ref or impl design and no compare points fail during verif. Matching techniques to map compare points:
A. name based (automatic) => exact name based or name filtering (i.e memreg__[1][0] is matched to MemReg_1_0)
B. non-name based. First the tool tries topological equiv (by looking at fanin cone topology), and then signature analysis (functional signatures derived from random pattern simulation, and topological signatures derived from fanin cone topology). The user can then provide user-defined mapping points where the tool has problems.

4. verification: for every i/p pattern, ref and impl designs must give the same response. 2 types of design equiv:
A. Design consistency => verif passes if "x" in ref matches with "0 or 1" in impl.
B. Design equality => additional requirement that verif passes only if "x" in ref matches with "x" in impl.

cmd line i/f: fm_shell
-------------
#start formality on cmd line
fm_shell -2012.06 -f scripts/rtl2gate.tcl |tee logs/rtl2gate.log

#DW lib are here: /apps/synopsys/syn/2008.09-SP5/dw/dw*/src/* (both in .v and .vhdl)
set hdlin_dwroot /apps/synopsys/syn/2008.09-SP5/ => specifies the DesignWare root. By default it's empty, meaning DW instances will be left elaborated as black boxes.
set enable_multiplier_generation true => enable Formality to generate multiplier architectures for all multiplier instances in  the  reference  design.
set hdlin_multiplier_architecture "dw_foundation" => this variable helps define the architecture Formality will generate for multiplier or DW02_multp instances encountered in the reference design   when  multiplier  generation  is  enabled. Arch may be none, csa, nbw, wall or dw_foundation. dw_foundation  - Attempt to choose the same architecture Design Compiler was likely to have chosen for each  multiplier  instance.

#create_container: creates empty container and establish it as current. this cmd loads the GTECH tech library, and any other shared tech libraries, into the new container. All of the info about 1 design (such as design lib, tech lib, etc) are contained in 1 container, and for other design in other container.
#create containers for both rtl and gate. RTL doesn't need tech lib in it's container, but impl needs as it has std cells.
create_container rtl1
create_container gate1

#read_db: reads designs or tech lib into the current container. Unless  you  specify  the name of the design or tech library, the command uses the default design library named WORK and the default tech library named TECH_WORK.
#NOTE: our designs (digtop) are NOT in .db format, but in verilog format, so we use read_db to read tech library only. read_verilog is used to read design.
#When we don't specify the container name, read_db reads tech lib into all open containers (so it reads in both rtl1 and gate1). To read it in impl(gate) container, we can add option -i, or to read it in ref(rtl) container, we can add option -r.  
read_db -tech /db/pdkoa/lbc8lv/.../synopsys/bin/MSL445_W_125_1.6_CORE.db => read tech lib for core cells in both containers
read_db -tech /db/pdkoa/lbc8lv/.../synopsys/bin/MSL445_W_125_1.6_CTS.db => CTS cells
read_db -tech /data/SPORSHO_2P0_OA_DS/ .../HDL/Memories/fedc01024064012/cad_models/fedc01024064012_W_125_1.6.db => read fram, sram cells (needed even though we blackbox these cells, otherwise elaborate and link (set_top cmd) won't be able to link these cells).

#Note that we are reading tech lib in .db format instead of reading verilog model files. We could have read verilog model files for these std cells, which would have yielded same result. It's better to use verilog model files since we already synthesized using liberty files, so running Formality with verilog model files makes sure that liberty and verilog models are in sync. However, verilog model must be in form of synthesizable RTL or structural netlist (no behavioral constructs or simulation models allowed).
#read_verilog -i -tech /db/pdk/lbc8lv/rev1/diglib/msl445/r2.3.0/verilog/models/*.v => -tech specifies that data goes into tech lib instead of design lib. -i means it's for impl, while -r means it's for reference. -con gives name of container where it should go (when we have more than 1 ref container). This cmd creates efficient gate level models from verilog modules and user defined prmitive desc in verilog models. These new gate level models are then used during verif.

# This is needed when DC added clock gating cells (absolutely necessary to use this). Use this variable to specify whether Formality allows designs with clock-gating to successfully verify against designs without clock-gating. By default, Formality does not consider such designs equivalent, because of the functional difference that could occur if the gating logic introduces an edge on a flip-flop clock pin that may not occur if the same input patterns are applied to the non-clock-gated circuit. Values are none(default), low, high or any. Considers latch-based clock gating, and combinational clock gating. It checks for glitch violations on comb clk gating designs. "any" implies either hold clk low when inactive (for a +ve edge flop, impl using an AND gate; high for a -ve edge flop, impl using an OR gate), or hold clk high when inactive (for a +ve edge flop; low for a -ve edge flop).
set verification_clock_gate_hold_mode any

#Clk gating results in 2 failing points:
1. clk gating latch is a compare point in impl, but doesn't have matching point in ref
2. logic feeding into clk of flop changes, so compare point created at flop fails.

# Read SVF file; set_svf sets the SVF. The SVF provides valuable information that can be used during compare point matching to facilitate alignment of compare points in the designs to be verified. Filenames specifies the names of the SVF files to read or directories to search. The SVF file is generated by DC (we use set_svf in the DC script to set the path for the svf file; DC auto-generates this svf file whenever we exit DC - if we don't exit DC, then the svf is a stale file), and it contains info about transformed names and compare points to be used in Formality. It's a binary file. Without the svf, Formality will usually fail matching.
set_svf /db/Hawkeye/design1p0/HDL/Synthesis/digtop/svf/digtop.svf

#read_verilog: Reads one or more Verilog files (RTL or structural). Designs (RTL/structural verilog) are  read into  design  libraries, and Verilog library cell descriptions (AN20.v, INV10.v, etc) are read in as technology libraries. By default, the tool  places  designs  into the  default  design library named WORK (unless we use -libname option), and cell descriptions into the default technology library named TECH_WORK. These are placed in container specified (FM_WORK/rtl1/WORK/). option -95|-01|-05|-09 specifies IEEE std for verilog (1995,2001,2005,2009 default being 2005). -vcs "VCS_OPTIONS" reads in vcs options for reading in dir(-y) etc. This is helpful when reading large number of files in a dir. However, -vcs option doesn't seem to work.
# Read RTL into RTL container
read_verilog -container rtl1 -libname WORK -01 { /db/Hawkeye/design1p0/HDL/Source/golden/global.v
/db/Hawkeye/design1p0/HDL/Source/golden/digtop.v ... one line for each file ...." } -vcs "-v extra.v -y libs/lib1" => NOTE: -vcs option doesn't work. It's ignored by tool.
#we can also read multiple files by using tcl list cmd. We set RTL_DIR by using tcl cmd: set RTL_DIR ../source
#read_verilog -container rtl1 -libname WORK -01 [list \
                 "$RTL_DIR/ahb_apb_bridge.v"  ... \
                 "$RTL_DIR/cm0ik_ahb_fram_bridge.v" ]

# Elaborate and link design. set_top Resolves cell references and elaborates RTL designs. We can use "set hdlin_auto_top true", which causes Formality to automatically determine top level module.
set_top rtl1:/WORK/digtop => here we say that top level module is digtop. If we want to do hier matching, we can choose one of the sub-modules as top level module. then matching will proceed starting from that module.

# Read Netlist into Gate container. We use -netlist option for structural verilog, since it reads it faster this way.
read_verilog  -container gate1 -libname WORK -netlist /db/Hawkeye/.../digtop_final_route.v

# Elaborate and link design
set_top gate1:/WORK/digtop => for gate netlist, we set top level module to digtop. Again, we can choose some lower level module as top-level module if we want to do hier matching starting from some lower level module.

#dir structure:
--------------
FM_WORK dir is where Formality keeps all designs. It has these dir:
1. Tech lib: dir MSL445_W_125_1.6_CORE.db,MSL445_W_125_1.6_CTS.db etc. are created when we run "read_db -tech"
2. design lib: dir for container rtl,gate etc are created when we run read_verilog for ref/impl. Inside these design lib, we have WORK, TECH_WORK and FM_BBOX dir. WORK contains dir for digtop and all other modules, which contain respective modules in .dmp binary format. TECH_WORK is not created in our case, since tech lib are read as db and not as verilog, so they are kept separately as tech lib. to get TECH_WORK dir, we have to read tech lib as "read_verilog -tech"
3. GTECH, r, i dir are created by default.
--------------

# Set RTL as "reference" design and GATE as "implementation" design
set_reference_design  rtl1:/WORK/digtop
set_implementation_design  gate1:/WORK/digtop

# Set constants.  type can be port, pin, net or cell(register)
#We disable scan, since scan flops are not present in RTL. ScanEn pin of all flops is tied to 1, so that mux inside flops only selects D pin.
set_constant -type port rtl1:/WORK/digtop/scan_mode_in  0
set_constant -type port gate1:/WORK/digtop/scan_mode_in 0
set_constant -type port rtl1:/WORK/digtop/scan_en_in  0
set_constant -type port gate1:/WORK/digtop/scan_en_in 0

#to force some cell to a constant value for debug purpose
#set_constant -type cell rtl1:/WORK/apb_cpsw/cpsw0_reg[0] 1
#set_constant -type cell gate1:/WORK/apb_cpsw/cpsw0_reg_0 1

#setting black boxes for IP/Macro. Needed when we want to rep logic that's unknown. i/p pins of blackbox become compare points, while o/p are treated as i/p points to other logic cones. blackbox in impl is considered equiv to blackbox in ref design, so any mismatch within the black-boxed design won't be caught. We sometimes have to set pin/port dirn for these blackboxes as Formality may not be able to determine dirn, and will assume the pin/port as bidir. Use "set_direction" cmd.
#set_black_box rtl1:/WORK/dig_top/u_8kB_fram/u_fram => omitting the fram instance (particular instance name, not module name) from matching in the rtl container. u_8kB_fram is the fram wrapper, while u_fram is the actual fram module (fedc01024064012). Sometimes we might need to blackbox the wrapper, as pins on the actual fram might be different in impl because of extra buffered clk pins, changed names of some pins, etc.
#set_black_box gate1:/WORK/dig_top/u_8kB_fram/u_fram

#set constraints like 1 hot(One control point at logic 1; others at logic 0), 1 cold(One control point at logic 0; others at logic 1), coupled(related ctl points always at same state), Mutually exclusive(two ctl points always at opposite state) or user defined (user defines the legal state of the control points).
#set_constraint 1hot {Q_reg[0] Q_reg[1] Q_reg[2]} ref:/WORK/digtop

#report setup statistics before running match and verify
report_setup_status => reports all warning,setup and other stats

# Match mapped points. Formality performs matching and reports summary. If unmapped points remain, You can issue commands that control matching (such as  set_compare_rule or  set_user_match) .
match

#name based mapping for eco fixes, as eco fixes may change names of some flops (due to swapping, spare cell use, etc)
#set_user_match [-type <pin|port|net|cell>] [-inverted|-noninverted] RTL_OBJ_ID GATE_OBJ_ID =>
#-type is only needed if name of specified design object is associated with more than one object type.
#-inverted/noninverted specifies if design obj have inverted or noninverted relationship (default is unknown relationship. If "verification_inversion_push" is not enabled, then all unknown polarities will default  to  noninverted. If "verification_inversion_push" is enabled, Formality will try to determine  the  polarity  for  registers that  have unspecified polarities).
ex: set_user_match rtl1:/WORK/digtop/sray_regs/deadtime_reg[0]  gate1:/WORK/digtop/sray_regs/deadtime_reg_1 => 1 will be o/p for success, while 0 is o/p for failure. Make sure you get "1" after running this cmd.

#set_user_match cmd is needed when doing block level verification for clk pins, as clk pins might be buffered and be named differently, so need to match as follows:
#set_user_match r: /WORK/design/clk i:/WORK/design/clk_L0_buf

#actual mapping (user defined match are applied at match. To remove user match, do undo_match, and then reissue match cmd)
match => match ensures that there are no mismatched logic cones, so that Formality can proceed with verification.

# Report unmatched points
report_unmatch => we should not see any unmatched points, except for clk-gating latches in clk-gated designs.

# Verify and report success. All compare points are verified in reference and impl, and the summary shows: passing (all compare points are equiv), failing (some compare points are non-equiv), aborted (compare points couldn't be identified as passing or failing; happens due to combo loops or compare points too difficult to verify) and not compared (some compare points are unverified or not verified; this happens when the failing point limit has been exceeded or there was some run error). Based on this, the final verif result is succeeded, failed or inconclusive (aborted or not compared).
verify => any compare points unmatched by match cmd above, and tried to be matched here. verify cmd runs match if match hasn't been run before.

# report Combinational Loops
report_loops -ref => there should be no loops in ref design
report_loops -impl => there should be no loops in gate design

# Report failures
report_fail

# analyze src of failure (shows possible src of failure)
analyze_points -failing

#start gui to help in debug (shows all failing patterns)
start_gui

exit

RESULTS:
----------

voltus:
------
power analysis tool (cadence). Cadence® Voltus IC Power Integrity Solution is a full-chip, cell-level power signoff tool that provides high-capacity analysis and optimization technologies to designers for debugging, verifying, and fixing IC chip power consumption, IR drop, and electromigration (EM) constraints and violations.

voltus replaced EPS 13.2 (Encounter Power System)
version 16.1: analyze pkg data with Sigrity, Innovus gui,
 - pwr calc for static and dynamic
 - EM/IR on Power grid (PG) for static and dynamic
 - static timing due to IR drop impact
 - decap opt: insertion/removal
 - power gating switches on ramp-up, steadystates

A=activity (net switching from 0->1 or 1->0 in 1 clk cycle). A of clk = (1+1)/1 = 2
D=duty cycle, % of time net has value 1 during sim time
TD=transition density = num of times signal toggle from 0->1 or 1->0 in 1 sec. TD=A*F. So, for clk=1MHz, TD=(1+1)/1us=2e+06

static pwr: avg activity is calc for each net, and pwr is reported for the whole sim window as 1 pwr number. So, pwr at each point in time is not known; it's the avg pwr over the whole window, hence called static. (k-factor pwr scaling parameters for PVT can be provided, which will scale the pwr num accordingly)
 - switching pwr = C*V^2*f = 1/2*C*V^2*F*A = 1/2*C*V^2*TD (factor of 1/2 is added since A=TD=2 for clk, while in reality it should be 1)
 - Int pwr = from .lib (both int switching and int feed thru pwr). It's energy (Joules), not power(Watts). If lkg power is in pW, then energy is in pJ. Int pwr can be on both i/p pin (mostly for macros) and o/p pin (mostly for stdcells). i/p pin pwr can be state dependent (what state other i/p pins are at), o/p pin pwr is based on i/p slew rate + cap load LUT. Since P=Energy*Freq*AF, P=1/2*(Erise + Efall)*TD => Energy stored in cap is 1/2*C*V^2, equiv amount is lost in resistor, since supply provides C*V^2. So, in 1 charge and discharge of cap, C*V^2 energy is lost in resistors.
 - lkg pwr = from .lib (state dependent lkg; generic lkg pwr also provided for states not defined)

ex: Int pwr: for 2 i/p AND, we see int_pwr showing on o/p Y (related to pin A, and then for pin B). We calc TD(A)=tran density on pin A, TD(B), and calc int_pwr of Y = 1/2*(Erise(A)+Efall(A))*TD(A) + 1/2*(Erise(B)+Efall(B))*TD(B) => see pg 153 of voltus user guide.
If TD(A)=1000/sec, TD(B)=2000/sec, then B's contribution to pwr at Y is 2X that from A.
ex: see pg 190 of voltus user guide. If gate is XOR gate, then TD(Y) != TD(A)+TD(B). since int_pwr is mostly reported for o/p pin, we need to look at TD(Y) to calc pwr at pin Y. We calc pwr(A->Y)(assuming Y switching coming exclusively from A switching) and pwr(B->Y)(assuming Y switching coming exclusively from B switching). Then we divide that pwr depending on TD(A) and TD(B) as follows:
Int_pwr(Y) = [pwr(A->Y)*TD(A) + pwr(B->Y)*TD(B)] / [TD(A)+TD(B)]

ex: Int pwr for a ram macro: int_pwr is defined on the clk pin only, for these 3 conditions (the clk pin is chosen for power reporting since the EZ/WZ signals on the clk edge decide what mode the macro is in. We monitor pwr for both high/low of clk, but since EZ/WZ don't change on the -ve edge of clk, pwr is calc correctly, as EZ/WZ remain stable for 1 full clk cycle; whatever the EZ/WZ status is on the +ve edge of clk is what is used for pwr calc):
1. idle: when: "(EZ)"; for various i/p slew rates on CLK = 4pJ
2. read: when: "(WZ&!EZ)"; for various i/p slew rates on CLK = 50pJ
3. wrt:  when: "(!WZ&!EZ)"; for various i/p slew rates on CLK = 57pJ
Int_pwr for macro calc similar to AND gate above. TD(clk) is calc, then see what fraction of time these 3 cond appear, and then calc pwr:
Int_pwr = TD(clk)*[P(1)*E(1) + P(2)*E(2) + P(3)*E(3)]/[P(1)+P(2)+P(3)]
ex: TD(clk) 5 toggles in 20us time chosen from VCD. For EZ=1, E=4pJ => P=E*TD=4pJ*5/20us=1uW. Voltus reports .001mW in .rpt, which matches our calc.
ex: TD(clk) 4 toggles in 20us time chosen from VCD. For 8.3us, it's in idle, and remaining 11.7us, it's in wrt. So P=(E1*P1+E2*P2)/(P1+P2)*TD=(4pJ*8.3/20+57pJ*11.7/20)*4/20us=7uW. voltus reports 6.7uW => close enough
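Both macro examples above can be reproduced with a small python sketch (energies in J, times in sec):

    def macro_int_pwr(td, conds):   # conds = list of (energy, time spent in that mode)
        e_avg = sum(e*t for e, t in conds) / sum(t for _, t in conds)
        return e_avg * td           # P = E_avg * TD

    print(macro_int_pwr(5/20e-6, [(4e-12, 20e-6)]))                      # 1e-06 W = 1uW (1st ex, all idle)
    print(macro_int_pwr(4/20e-6, [(4e-12, 8.3e-6), (57e-12, 11.7e-6)]))  # ~7e-06 W = 7uW (2nd ex, idle + wrt)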

dynamic pwr: It analyzes current over specified time, and calc pwr at each point in time. It can be both vectored or vectorless. Since VCD can be very large, dynamic pwr analysis should be limited to at most 5 clk cycles of fastest/dominant clk with max switching activity. Vectorless approach generates worst case activity by propagating user supplied activity, and so gives pessimistic results.

cmd:
---
voltus -file run.tcl -no_gui => for no gui

run.tcl:
---
1. load design/library
 - read_lib -max {CORE/liberty/MSL700_W_125_1.35_CORE_iso_pg.lib liberty/ssbwmv3m04096033080_W_125_1.35.lib ...} => these pg.lib files have PG pins (VDD/VSS) for all cells, related power pins for each i/p, o/p pin (A,Y). .lib files also have lkg pwr and internal power (for o/p pin) in pW. internal pwr is crossbar power.
  ex: inv1 => lkg pwr =25pW, int pwr = 0.006pW
 - read_lib -lef  {MSL700_tech.lef} => This has all metal/via rules and shapes.
 - read_verilog netlist.v => this reads gate level netlist
 - set_top_module dig_top => This reads all libs/verilog above

 - read_def digtop.def => def file needed for layout
 - #read_power_domain -cpf cpf1 => optional
 - read_spef -rc_corner QC_MAX_1.5 {QC_MAX_1.5_ATD_W_125_1.35_maxC_maxvia_decoupled_125.spef.gz} => spef file to get RC for interconnects. WLM can also be provided if spef not available. -rc_corner is optional.
 - read_sdc func.sdc => sdc file for func mode

 - source scripts/view_definition.tcl => this is from MMMC flow in PnR. create rc_corner, library_set, delay_corner, constraint_mode and analysis_view for multiple corners.
 - set_analysis_view -setup [list func_QC_MAX_1.5_ATD_W_125_1.35] -hold [list func_QC_MAX_1.5_ATD_W_125_1.35] => set view for max corner. same corner used for setup and hold as we want same kind of delays on both setup/hold. Pwr view can be set to only 1 view at a time, either setup or hold. Use "set_analysis_mode -checkType setup" to choose setup view as power view.
 - #update_timing => optional

2. setup:
 - setup switching activity
  A. vectorless: transition density and duty cycle of nets specified. At the least, switching activity of PIs should be provided. Tool can propagate transition activity thru combo logic. Activity prop thru seq cells is hard, as they mostly have loops, so best to provide AF at o/p of seq cells. Similarly, AF at en pin of clk gaters should be provided, since activity propagation for clk en pin may prop incorrectly. For macros, AF at rd/wrt i/p pins should be provided.
     - Set up defaults for inputs/flops
       set_default_switching_activity -duty 0.5 -seq_activity 0.2 -input_activity 0.1 => 0.2 at o/p of flops
       set_default_switching_activity -global_activity 0.2 -clock_gates_output_ratio 1
     - Can also set specific activities on specific pins
       set_switching_activity -activity 0.2 -duty 0.5 -inst flopA
  B. vectored: provide VCD file from gate/RTL sim, or TCF file which provides toggle counts for each net.
     - read_activity_file -format VCD design_mode.vcd.gz -start 461us -end 639us -scope usbpd_testbench_bga0/usbpd_digtop_0 => specify start/end time within vcd that is to be used (by default, entire time window is used). -scope specifies module within vcd to be used =>
$scope module usbpd_testbench_bga0 $end => line in vcd that specifies scope
$scope module usbpd_digtop_0 $end => line in vcd


3. setup and analyze power
static/dynamic power:
A.setup
GUI: pwr_and_rail->set_pwr_analysis_mode(set analysis=Static)
set_power_analysis_mode -method static \ => use -method dynamic_vectorbased for dynamic pwr
                        -analysis_view func_QC_MAX_1.5_ATD_W_125_1.35 \
                        -create_binary_db true \ => save plot data power.db
                        -use_encounter_db false \
                        -transition_time_method max \
                        -write_static_currents true \
#                        -disable_static false \
#                        -ignore_control_signals false \
#                        -read_rcdb true

B. analyze power
GUI: pwr_and_rail->run_pwr_analysis (fill tabs for basic, activity, power, advanced)
GUI: pwr_and_rail->text_rpts(pwr_analysis) => to see reports
GUI: pwr_and_rail >pwr_rail_plots (select pwr, load power.db) => will show all pwr/activity
#dynamic current plot
GUI: pwr_and_rail >dynamic results->waveforms (select pwr waveform, choose pwr db, add waveform file dynamic_VDD.ptiavg, select any inst and click plot. to see current for all of VDD, choose "total current" from composite waveform menu) => shows dynamic current in simvision. Current (NOT pwr) is shown. current shows up from time 0 to end time in vcd file. We should see current spikes around clk edges. We can also plot current for all clks only.
#pwr profiling plot
If we have pwr profiling going on, then we can choose "profiling histograms" in above case, choose same pwr db, then add *.rpt.trn waveform file (*.trn file gets generated automatically for pwr profiling), select that *.trn waveform file and click Plot. On the plot, we will see pwr (NOT current), in histogram (bars) form from start time to end time of vcd file. We see pwr histograms in widths of step (if step=1us, then for 10us vcd run time, we see 10 histograms with width of 1us each). It shows total pwr as well as switching, lkg, int. It shows only the top level of hier. Also, pwr number here is for each separate time step, so to calculate total pwr, we have to add pwr for all steps multiplied by each time step and then divide by total time. Note that this averaged dynamic pwr number should equal the static pwr number, as it's just avg of pwr over the whole time window.
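The averaging described above is simple to sketch in python (step size and per-step pwr numbers below are made up):

    step = 1e-6                                       # 1us histogram step
    pwr_per_step = [2.1e-3, 3.4e-3, 1.8e-3, 2.7e-3]   # W, one assumed value per step
    total_time = step * len(pwr_per_step)
    avg_pwr = sum(p*step for p in pwr_per_step) / total_time
    print(avg_pwr)                                    # 2.5e-03 W => should match the static pwr number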

set_power_output_dir my_dir
report_power -outfile pwr.rpt => this dumps results in my_dir/pwr.rpt
report_power -outfile dir1/pwr.rpt => this dumps results in dir1/pwr.rpt. overrides output_dir set above.
report_power -no_wrap \ => reports static/dynamic pwr depending on settings above
         -output staticReports \ => o/p dir that stores all pwr rpt
         -report_prefix design.power \ => o/p files prefixed with "design.power".*.rpt
         -view func_QC_MAX_1.5_ATD_W_125_1.35 \ => pwr analysis is run on only 1 view at a time
         -instances {*} \ => specifies inst to include in pwr rpt. shows rpt for each inst in instpwr.rpt
#         -cell {CTB*} => specifies cells to include in pwr rpt.
#            -format {simple|detailed} => reports pwr consumed by all nets in simple/detailed
#         -hierarchy {all} => reports for all hier level starting from top to leaf. shows rpt in hierpwr.rpt. hier level of 0 reports only for top level, while 2 will report for 2 levels below top level. default is all.
#         -net -nworst 100 => reports net switching pwr for each net in design. -nworst 100 reports only 100 nets with highest net sw pwr. useful for debug
         -create_power_db true  => creates power database

#report_power -hierarchy 3 -outfile hier.rpt => reports pwr 3 levels down
#report_power -instances all -outfile inst.rpt =>
#report_power -net -nworst 1000 -outfile net.rpt =>

#report_instance_power inst1 -outfile inst1.rpt => Generate detailed report on power calculation for specific instance. very powerful cmd, shows internal power calc method. (for dynamic pwr runs, we have to run this too: set_power_include_file)

#restore_power_database -file power.db => to restore old power db results from prev run

#vector profiling => identifies windows with max activity and power which then drives dynamic vector based pwr analysis.
 2 types of vector profiling:
1. avg vector profiling: vector profiler computes average toggle density within each step to compute and display average power profile of a VCD/FSDB file. default step size is 2 times the fastest clock period.
2. event-based profiling: vector profiler computes power profile of every event on each net. This accurately captures vectors that could produce peak power using very small resolution. default step size is 1ps.
report_vector_profile -event_based_peak_power -write_profiling_db true -detailed_report true -outfile func_power.rpt -step 1000 => -average_power does avg vector profiling. step size here is specified as 1000ns=1us. This will report intervals of 1us with max power. NOTE: step size is calc automatically if start/stop time for vcd is provided (ignores -step in that case). Then we choose time window with max power, and use that time window in vcd file to once again do vector profiling with -step 1 (1ns step). This gives us max pwr for that 1ns time window. Or, we can do this to do it all in one run:
report_vector_profile -event_based_peak_power -write_profiling_db true -detailed_report true -outfile func_power.rpt -step 1 => 1ns window
read_activity_file -reset
read_activity_file -start $worst_power_window_start -end $worst_power_window_end => the worst power window (from profiling above) is stored in these vars
report_power -outfile worst_pwr.rpt

view_analysis_results => Ability to script loading power results without navigating GUI menus
view_dynamic_waveform -type profile -waveform_files func_power.rpt.trn => runs simvision to display dynamic power. *.trn is dumped auto, when "-write_profiling_db true"
write_tcf top.tcf => Dump out toggle count for every net or pin in design. Useful for comparing toggle propagation between different setups

----
effective resistance: calc eff res b/w 2 nodes on PG, or from any node/inst to voltage src. does it for all inst in design in 2 modes:
1. net based: analyze_resistance -net <net_name> (o/p=effr.rpt)
2. domain based => analyze_resistance -domain <domain_name> (o/p=domain_effr.rpt). gives Rvdd+Rvss for all nets in design

#tcl file
set_pg_nets -net VDD -voltage 1.10 -threshold 0.99 -tolerance 0.3 -force
set_pg_nets -net VSS -voltage 0.00 -threshold 0.11 -tolerance 0.3 -force
set_rail_analysis_mode -ignore_shorts true -work_directory_name work.zx -method static -accuracy hd -enable_manufacturing_effects true -power_grid_library ../accurate_stdcells.cl -temp_directory_name ./tmp.zx -cell_ignore_file ../fill.list (accuracy=xd is used for relaxed accuracy (based on lef files), while hd is used for high accuracy (based on gds files))
set_rail_analysis_domain -name PD -pwrnets VDD -gndnets VSS => needed for domain based
set_power_pads -net VDD -format xy -file ../VDD.pp => pwr pad location has to be specified
set_power_pads -net VSS -format xy -file ../VSS.pp  => pwr pad location has to be specified
set_package -spice ../pkg.spi -mapping ../pkg.map
analyze_resistance -net VDD => for net based Reff. -node_list can be specified for exact coord where we want Reff to be measured. -node_list {{90 11 M4} {75 11 M4}}. -instance_list can be used to specify inst where we want Reff to measured. -instance_list {{INV1 vdd} {INV2 vdd}}
analyze_resistance -domain PD => for domain based Reff

------------
IR drop/ gnd bounce: IR_drop = drop in VDD, gnd_bounce=inc in VSS. causes timing problems.
---
IR drop inc if more cells switch together (i.e more I), or if line more resistive (i.e more R).

Static IR drop: avg current draw is used to calc IR drop. Usually peak I is much higher than avg I when looked at in small time windows, but adding enough decoupling caps smooths this peak I out, so that static IR and dynamic IR get close enough. Typical limit for IR drop is 2-5%.
dynamic IR drop: peak current draw is used to calc IR drop. waveforms show transient I. decoupling caps reduce dynamic IR drop. Many decoupling caps are built in (as gate cap, diffusion cap, parasitic cap b/w pwr/gnd), while other decoupling caps are inserted on purpose to reduce dyn IR. too much decap will cause more pwr, as current leaks thru decap, so voltus tries to reposition existing decaps more effectively, before adding new decaps.
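A toy python check of static IR drop against the 2-5% budget (I and R values are assumptions; real analysis is per node on the grid):

    VDD, I_avg, R_eff = 1.10, 20e-3, 1.5   # supply (V), avg current (A), effective PG res (ohm), all assumed
    ir_drop = I_avg * R_eff                # V = I*R = 30mV
    print(100 * ir_drop / VDD)             # ~2.7% of VDD => within the typical 2-5% limit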

symptoms of IR drop:
- logical malfunction. may be timing failures. Inc voltage usually resolves it.
- data dependent failure. When some data pattern causes excessive activity, resulting in large IR drop. Dec clk freq may resolve it.
- clock jitter. 5% IR drop on clk buffer can reduce its speed by 15%. The drop not only reduces the logical High voltage of gates, but also slows the charging/discharging of logic as lower voltage is available now.

For 130nm and below, manufacturing effects of wire widths, etc are modeled correctly in tools. Dishing, slotting, cladding affect wire width. Erosion affects wire thickness. Density rules for metal help reduce erosion. Metal fills (either floating or tied to gnd) were done at the foundry step, but now they are done in the design stage to model changes in cap, etc. gnd metal fill causes higher cap than floating metal fill.

#tcl file:
set_pg_nets -net VDD -voltage 1.10 -threshold 0.99 -tolerance 0.3 -force
set_pg_nets -net VSS -voltage 0.00 -threshold 0.11 -tolerance 0.3 -force
set_rail_analysis_mode -method static -accuracy hd -power_grid_library {stdcells.cl mem.cl} //for dynamic use -method dynamic
#set_rail_analysis_mode -report_power_in_parallel true => this allows pwr analysis to be run in parallel with rail analysis. No need to run separate pwr analysis
set_rail_analysis_domain -name PD -pwrnets VDD -gndnets VSS => needed for domain based
set_power_pads -net VDD -format xy -file ../VDD.pp => pwr pad location has to be specified
set_power_pads -net VSS -format xy -file ../VSS.pp  => pwr pad location has to be specified
set_power_data -format current -scale 1 {static_VDD.ptiavg static_VSS.ptiavg} => o/p reports
analyze_rail -type domain -results_directory static_rail PDcore => runs static IR analysis

From the results, we can get plots for IRdrop, grid_res, resistor_current, current_density, etc
read_power_rail_results -rail_directory ALL_25C_avg_1/VSS
report_power_rail_results -plot ir -filename VSS.irdrop.report => by default, all text reports are generated

------------
EM: caused by movement of atoms in wire because of high current. pwr grids, which have redundant wires, exhibit higher Res due to EM, while signal wires, which provide unique connectivity, suffer total failure due to EM. shorts to neighboring wires may also cause total failure.
----
2 phenomena cause EM:
1. wearout: metals become narrower at places where metal atoms start moving, causing wire to break. To reduce this, metal wires are built in a sandwich structure with top and bottom layers made of a metal which is more resistant to EM, and the central metal is the real conductor (for ex: TiN=Titanium nitride cladding around Aluminum metal. Cu is increasingly used for metal as it not only offers lower Res, but also higher resistance to EM wearout). This prevents total wire failure.
2. Joule heating: high ac currents may cause excessive heating resulting in thermal expansion and temperature induced EM.

EM modeled using Black's eqn. MTTF obtained using this is used to calc prob of failure for a wire. Then using prob failure for each wire, failure prob for whole chip is calc.
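Black's eqn in its standard form is MTTF = A * J^(-n) * exp(Ea/(k*T)). A python sketch (A, n, Ea values below are assumed for illustration only):

    import math
    k = 8.617e-5                            # Boltzmann const in eV/K
    def mttf(J, T, A=1e3, n=2, Ea=0.9):     # J = current density, T in Kelvin; A/n/Ea assumed
        return A * J**(-n) * math.exp(Ea/(k*T))
    print(mttf(1e5, 378.15))                # at 105C
    print(mttf(2e5, 378.15))                # 2x J => 4x shorter MTTF for n=2; higher T (joule heating) also shortens MTTF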

--------------
pwr network optimization (PNO) and ESD analysis/opt also done by voltus.

Verplex was a formal verification tool developed by Verplex Systems in 1998. Cadence acquired it in 2003, and developed it under the Conformal family of tools. Conformal family of tools are:

- Conformal LEC: formal verification tool (similar to formality by Synopsys). It has basic to advanced tools starting from Conformal L, Conformal XL to Conformal GXL.
- Conformal Low Power: enables low power equivalence and func checks for isolation cells, level shifter cells, state-retention cells.
- Conformal Constraint Designer
- Conformal Custom
- Conformal ECO Designer (look in eco.txt to see how to use conformal for eco)

Conformal LEC:
--------------
Encounter Conformal (EC) Logical equivalency checker (LEC): verifies RTL, gate or transistor level design
--------------------------------------
4 variants of conformal:
Conformal L (basic LEC), XL (extends to datapath synthesis/layout), GXL (extends to custom logic/memories), LowPower(includes equivalence and functional checks for isolation cells, level shifter cells and state retention cells).

Conformal LEC is 3 step process:
1. Setup mode: Read in golden and revised design and their associated library. Designs can be in verilog/vhdl, while libraries can be as verilog library or as Liberty files. Then we specify constraints and other parameters. All these designs/libraries are translated into conformal primitive gate types, which are: AND, OR, MUX, BUF, XOR, DFF, DLAT, INV, ADD, MUL, SUB, TIE0, etc (about 100 primitives). Most of these primitives are the same as verilog primitives.

2. Transition from setup to LEC mode: conformal checks various rules during parsing, and reports all of the library and design rule violations that occurred during parsing. Then it flattens and models the Golden and Revised designs and automatically maps the key points.
Key points are defined as Primary Inputs(PI), Primary Outputs(PO), D Flip-Flops(DFF), D Latches(DLAT), Blackboxes(BBOX), TIE-E Gates(E:error gate, created when x-assignment exists in Revised design), TIE-Z Gates(Z:high impedance or floating signals), Cut gates(CUT:artificial gates that break combinational loops).

3. LEC mode: Conformal does mapping of key points and reports unmapped points. Then it compares these mapped points (PO, DFF, DLAT, BBOX, E, Z, CUT but NOT PI) to determine if they are equiv or not. Each Key point may have multiple pins, and when checking for equivalence, all of these pins have to be equiv for that key point to be equiv.
We can also designate mapped points for comparison manually by adding naming rules. 2 ways of comparison:
A. Hierarchical comparison: done if one of the 2 designs is RTL.
B. flattened comparison: done when both designs are gate level

Mapping is of 2 types: Needs to be set before exiting setup mode. Default is name based mapping.
A. name based (automatic) => 3 name based mapping:
 I. name first mapping: first maps key points with same name, and then remaining key points with mapping algorithm. Any remaining points are identified as unmapped points.
 II. name guide mapping: it does the opposite of name first. It first maps key points with mapping algorithm, and then remaining key points by matching names. Any remaining points are identified as unmapped points.
 III. name only mapping: maps points only if names match. Any key points that don't have the same name are identified as unmapped points.
B. non-name based: Doesn't involve names. 1 type of non name based mapping:
 I. no name mapping: relies solely on mapping algo to map key points. Any remaining points are identified as unmapped points.

name based mapping is preferred as the name effort (set in the SET MAPPING METHOD command) is high by default and can take care of most changes in delimiters. After name-based mapping is performed, the leftover key points will have different names between the Golden and Revised designs. There are several ways you can make their names similar in both designs:
 1. Naming rules => Naming rules are applied in RTL and facilitate name based mapping with the netlist
 2. Renaming rules => Renaming rules are applied when the instance names are different in golden and revised but share a common pattern.
 3. ADD MAPPED POINTS command => When the key points having different names don't share a common pattern and are few in number, it is best to manually map those key points using the add mapped points command.

Number of key points b/w golden and revised may be different. Tool maps same key points b/w the two. Any key points still not mapped are classified as unmapped points. Unmapped points are classified into 3 types (each type may have key points from DFF/DLAT, E, Z, etc):
1. Extra unmapped points (E): key points that are present in only one of the designs, Golden or Revised. This E is different than the E of key points which stands for Error key points.
2. Unreachable unmapped points (U): key points that do not have an observable point, i.e. they do not propagate to a primary output or other key points.
3. Not-mapped unmapped key points (shows as Red DOT): key points that are reachable but do not have a corresponding point in the logic fan-in cone of the corresponding design. For complete mapping, there should not be any of these. These need to be resolved before running compare. Once we have 0 "Not-mapped" unmapped key points, we should report all unmapped points, and make sure they are expected as either E or U.

Compared points may be equiv, non-equiv, inverted-equiv or aborted. There should be no non-equiv points for a passing design. In the case of aborted compare points, you can change the compare effort to a higher setting. Thus, Conformal can continue the comparison on only the aborted compare points.

NOTE: For the whole design, all PI, PO, flops, latches and BlackBox are mapped. Points that still remain unmapped may be due to spare flops in revised, optimized away flops that are present in RTL but not in revised, BlackBox of antenna diodes in revised, etc. Any other unmapped points will need to be mapped. If they remain unmapped, then that may indicate some real flops missing in revised. Once all points are mapped (except for expected unmapped points), we set compare points which are PO, flops, latches and BlackBox. If they all match, then designs are equiv, else non-equiv
 
Renaming rules: Add renaming rules if names don't match, and we want to force the tool to map certain key points.
-------------
add renaming rule d2D '/d$' '/D' => adds a renaming rule "d2D" which renames small d to capital D.
add renaming rule r1 {"_reg_%d"} {"_reg[@1]"} => maps _reg_1 to _reg[1]
test renaming rule fsm_state_reg_2/N01 => this tests rules against the specified object and shows the renamed result.
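A python equivalent of the "_reg_%d" -> "_reg[@1]" rule above, for intuition only (this is not Conformal syntax; %d captures a number, @1 reuses the captured field):

    import re
    def rename(name):
        return re.sub(r'_reg_(\d+)', r'_reg[\1]', name)
    print(rename("fsm_state_reg_2"))   # fsm_state_reg[2]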

Conformal supports 2 types of tcl cmd:
1. native tcl cmd (not as efficient as internal C functions). entered by typing "tclmode" on cmd line. prompt shows TCL_*>.
ex: TCL_*> set env(RC_VERSION) "RC10.1.200 - v10.10-s202_1" => tcl cmd entered while prompt shows TCL_*.
2. Conformal tcl cmd (which have been tailored for use with Conformal to query the design database. Information retrieved from the design database is referenced by pointers (which are also called object handles in Tcl)). entered by typing "vpxmode" on cmd line. prompt doesn't show TCL_ anymore, so it means it's in vpx mode.

dofile syntax:
------------
1. // comments out rest of the cmd, while /// comments out rest of the line
ex: //read library lib_01.lib \
      lib_02.lib lib03.lib    => here all of the cmd is ignored (1st line as well as second line)
ex: read library lib_01.lib \
    lib_02.lib /// lib03.lib => here only lib03.lib is ignored. Rest of the cmd read lib01 and lib02.

2. Directives: We can enable/disable specified synthesis directives when reading in verilog/vhdl files. All directives by Cadence, synopsys, ambit are enabled by default.
Conformal directives such as "infer_latch", "multi_port", "clock_hold", etc are added in the comment line //.
set directive off synopsys => this disables all synopsys directives found in verilog. We can use "on" to enable directives. If we don't provide vendor name as "synopsys" or other vendor name, then option applies to all vendors.
set directive off => disables all directives
set directive on parallel_case => enables only parallel_case directive since all other directives have been turned off.

Conformal lec run:
----------------
In main dir, we can have startup file .conformal_lec (it can be in installation dir, home dir or current dir) that conformal will execute on startup. The cmd files that are run are called dofiles. dofiles can be used at startup by specifying -dofile option

lec -12.10-s400    -nogui -log ./logs/rtl2gate.log -dofile scripts/rtl2gate.do => without -xl or -gxl, default Conformal L is started.
lec -9.1       -xl -nogui -log ./logs/rtl2gate.log -dofile scripts/rtl2gate.do => -xl needed for running dofile from RC.
lec -13.1-s180 -xl -nogui -log ./logs/rtl2gate.log -tclmode cmd.tcl => here instead of do file, we use tcl file
NOTE: starting from lec -13.1 and up to x.y, all cmds below are not separate words (i.e. read library), but joined by _ (i.e. read_library). However, from 15.x onwards, cmds are back to the original way (w/o _).
NOTE: we can start lec in gui mode by omitting -nogui

rtl2gate.do:
----------
0. set system mode setup => optional, as by default it's in setup mode (in later versions, all connected by "_", "set_system_mode setup").
1. Read library: It reads library in liberty format (since liberty files are used during synthesis). It can also read simulation library in verilog format, but only V-1995 format is supported. Simulation libraries should be used for final equiv check, since design verification signoff happens with simulation libraries, and NOT with synthesis libraries.
#-statetable option applies with liberty files, and specifies that library contains state tables.
#-both says read these libraries for both golden and revised. Else put -golden or -revised.
read library -statetable -liberty -pg_pin -both \
        /db/pdk/.../src/MSL270_W_125_2.5_CORE.lib \
        /db/pdk/.../src/MSL270_W_125_2.5_CTS.lib \
        /db/pdk/.../src/edc01024064012_W_125_2.5.lib => if fram or other ip are there

#read library -both -sensitive -Verilog /db/pdkoa/lbc7/2012.05.07/diglib/msl270/verilog/models/*.v => reading simulation library. sensitive means make it case sensitive
#read library -verilog -both lib/*.v fram.v => to read verilog simulation library

#NOTE: sometimes .lib and .v library for cells are not equiv at TI for some of the cells. In such case, first validate that the two libraries are the same so that there is no mismatch b/w lec and rtl sim results. Run these 3 steps to do that in a separate script.
read design -golden -verilog -define TI_functiononly -replace /db/verilog/models/*.v
read design -revised -liberty -append  /db/pdk/.../src/MSL270_W_125_2.5_CORE.lib
validate library => results show all cells and whether they are equiv or not b/w verilog and liberty

2. Read RTL design: Read rtl, elaborate and set top level to digtop
#-lastmod specifies that use the last module found incase of duplicate modules (default is to use the first module found and ignore other duplicate modules)
#-noelab should be used when design contains mixed languages. We want to do elaboration separately, as doing it with the read design cmd might fail to resolve it appropriately.
#-keep_unreach => unreachable key points are preserved. This is needed when hdl_preserve_unused_registers attribute is true on RTL designs.
#-map or -mapfile option allows designs to be stored in a named workspace instead of default "work" workspace.
#-rangeconstraint and -configuration only apply to vhdl files. -rangeconstraint causes "dont care" for attributes when a variable is out of range. -configuration causes conformal to link entity/architecture giving configuration higher priority than lastmod.
read design -rangeconstraint -configuration  -vhdl 93 -golden -lastmod -noelab \
                ../../Source/spi_typedefs.vhd \
                ../../Source/spi.vhd   
#read design -verilog top.v -golden => read verilog design separately (-verilog2k for V2001)
#read design -file golden.vc -golden => golden.vc file has all verilog files in it as:
#-y Source/* => all files in this dir read
#golden1.v => other verilog files to be read
NOTE: when using golden.vc, sometimes digtop cannot be resolved as top level module, so we'll need to specify "digtop" and "defines" file separately
#write design => used to write out the design in verilog format that was read in. Useful to learn how Conformal parses RTL.
#elaborate design -golden => not needed as design is already elaborated by default.
set root module digtop -golden => here we can set root module to some lower level also, in that case comparison will start from that module. useful during hier comparison.

3. Read Synthesized gate level design, and set top level to digtop (no elaboration needed for gate level design):
read design -verilog -revised -lastmod netlist/digtop.v \
elaborate design -revised
set root module digtop -revised  => here we can set root module to some lower level just as can do it for golden above.

4. reports: rule check report below is a must for any RTL/gate netlist to make sure all of these warnings are OK. When RTL is coded for the first time, running verplex on RTL and reporting rule check is done to find any errors in RTL. These reports are also shown by default.
report rule check -all -verbose -design -golden => this reports all warnings/errors on golden RTL
report rule check -all -verbose -design -revised => this reports all warnings/errors on revised netlist
report design data => to run report of current design info. displays number of design modules, library cells, inputs, outputs, primitives, and one-to-one mapped state points on the Golden and Revised designs
report black box => To see what modules/cells got reported as black box. There should be none

5. set parameters
##add blackbox for IP/Macro. By default, blackboxes are mapped by their module names. To map by instance names instead, use the "SET MAPPING METHOD -nobbox_name_match" command. 3 ways:
A. add notranslate module : run before reading in design or library (before step 1 above). This is used for memories, since here the actual code of module is not parsed, but only dir for I/O ports are parsed and used for blackbox. This saves computer memory from reading and comparing these huge memory modules. ex: add notranslate module -both sshdbw00056025020 => this sram module treated as blackbox for both golden/revised.
B. add black box : run after module has already been read in. Used during hier comp.
C. set undefined cell -black_box : run before reading in design or library. this is useful if module doesn't exist at all. It tells Conformal to treat missing ref as blackbox. First 2 cmds above require code (may be empty code) for module with port declarations (conformal uses input/output dirn), while this one doesn't.
add black box fedc01024064012 -revised => module name and NOT instance name. No hier as it's flat in gate verilog. Ideally we should add black box for both RTL and gate by using option -both.
#add black box /U1/U4 -module -Golden => U4 module inside top level U1 module. If -module not used, then provide instance name

#reports if blackboxes are paired correctly b/w RTL and gate.
report black box -detail => use -detail to get detailed info.

##add net/pin constraint (only needed for scan designs):
add pin constraints 0 scan_en_in -revised => to force scan_en_in to 0 for gate netlist (since scan_en_in not tied to anything in golden RTL). If scan_en_in is not forced to 0, then gate netlist will have all flops which have extra scan_en pin, and when scan_en_in=1, then RTL and gate flop logic will mismatch.
#add pin constraints 0 scan_mode_in -both => This is needed since synthesis adds an extra mux on the output of final scan_out flop to give out diff o/p during scan_mode. However, we should run the tool with this commented out, so that we can be sure that the sdo_out port is the only one that is mismatching. Then we can uncomment this constraint, and run LEC again, which should be clean. Since scan_mode_in is just like any other i/p pin, blindly setting it to 0, may mask real logic problem which might have happened during synthesis.

NOTE: ideally, we should not constrain any pin for scan design. There are 4 possibilities of scan_en and scan_mode combo:
1. scan_en=0, scan_mode=0/1: This is above constrain that we already have. Only mismatch should be SDO out (when se=0, sm=1). For newer designs, tool puts mux at o/p pin SDO whose select pin is tied to se instead of sm. So, for se=0 (sm=x), designs would be lec clean since SDO would be matched for se=0, sm=x. If select pin of mux was tied to sm, then SDO out won't be equiv b/w gate and rtl.

2. scan_en=1, scan_mode=0  : This constrain can be put only for newer designs which don't have scan_en and scan_mode as PI ports. Since Internally scan_en is turned off whenever scan_mode=0 (there's "AND" gate logic), this becomes same as above (se=0,sm=0). Should pass with no mismatches. It's important to check this case as we may have design where during synthesis, we incorrectly set PI pin as se pin (instead of setting o/p of and gate as se pin, where 1 pin is PI and other pin is sm). In that case lec may never catch this bug if we don't check for this constraint (se=1, sm=0). silicon will never work with this bug, as during func run even when sm=0, toggling PI pin will cause se pin of each flop to toggle causing incorrect behaviour.
NOTE: for older designs which have scan_mode and scan_en i/p and o/p pins tied together at top level, we may not be able to run lec to test this case. However, it's already tested, as o/p pin scan_en_out is already tested in case 1 above (se=0, sm=0/1) for LEC as it's PO. So, no way that bug for option 2 can appear for older designs w/o causing LEC mismatch in scenario 1 above.

3. scan_en=1, scan_mode=1  : This will have all flops as non-eq, since flops in rtl do not have se pin, so this case can't be tested.

We should run these 2 cases (1 and 2 above) in separate runs, and make sure they are clean. Just checking for se=0, sm=0 doesn't check all possible logic for scan.

NOTE: In newer designs, we don't have scan_en_in and scan_mode_in as separate PI ports. Since constraints can only be applied to PI pins, we have 2 options to get it to work:
 1. we set root module to the sub-module which has this pin as the i/p pin to that sub-module. Then we can add pin constraint to that pin as it's a PI of that module. Then we switch back to top level module.
Ex: to set scan_en_in to 0, and make scan_en_out as PO. This makes the design same as older designs where scan_en_in i/p was constrained to 0, while scan_en_out was PO.
 set_root_module  u_dig_test_1 -revised => we provide module name and NOT instance name
 add_pin_constraints 0 test_se -revised => add pin constraint to scan_en pin to that submodule. Do it for other submodules too whose scan_en i/p pin need to be tied to 0 (by setting root module to that submodule).
 set_root_module DIG_TOP -both          => switch back to top level module to do comparison once all sub-module pins have been constrained
 add_primary_output u_DIA_DIG/scan_enable -both => we add "scan_en" o/p of sub-module as PO so that it can be compared. If we do not do this, then "u_DIA_DIG/scan_enable" may become Z(f) pin, which will not be compared. This pin will now show up as PO for key point mapping purpose.

 2. Most of the times, scan_en and scan_mode are o/p pin of submodule (and not i/p pin). They do go as i/p pin to various other sub-modules, but then we have to do add_pin_constraints to i/p pin of each sub-module (at too many places), so, option 1 above may not work efficiently. Instead we can cut o/p ports of such submodules, and make them PI and then constraint them. It achieves the same result as option 1 above, but is much easier.
Ex: same as option 1 where it looks the same as older designs.
 add_primary_input "u_SPT_DIG/auto/Scan_En" -net -cut -both => o/p pin of "auto" module is made as PI. -pin says it's a pin, while -net says it's a net (net is default). -Cut cuts the other original drivers and allow only the newly added primary input as the driver of the net or pin. This is the default. -NOCut does not cut the other original drivers, so new net gets driven by both internal as well as external driver (becomes a wired net). In this ex, "u_SPT_DIG/auto/Scan_En" net becomes a floating net, and all connections to this net, now get driven by PI pin "u_SPT_DIG/auto/Scan_En" which is a new user defined PI.
 add_pin_constraints 0 "u_SPT_DIG/auto/Scan_En"   -both  => as this pin is PI now, we can add constraint now. Note: once a constraint is added on a PI, it doesn't show up as PI anymore, as it's not used for mapping (since it's a constant pin)
 add_primary_output "u_SPT_DIG/auto/Scan_En" -both => we add "scan_en" o/p of sub-module as PO so that it can be compared. This pin will now show up as PO. This pin is defined both as PI and PO. That's OK as bidir ports are also defined that way. Also, it doesn't show up as PI anymore as it's a constrained pin.

add pin equivalences CLK CLK1 -revised => clk in RTL got translated as 2 pins CLK and CLK1 in gate. So, CLK1 is declared same as CLK.
add primary input net1 -net -revised => to add net1 inside revised netlist as extra PI to gate netlist
add primary output net2 -revised => to add net2 inside revised netlist as extra PO to gate netlist
add tied signals 0 SO -net -module U1 -revised => to tie floating nets/pins to 0/1
add instance constraints 0 /TOP/U2 => To constrain any internal DFF or DLAT output to Logic-0 or Logic-1
add instance equivalence U1 U2 -Golden => to specify internal equivalence or inverted equivalence between DFFs or D-Latches.
add cut point /U1/net1 -revised => To specify the cut points for breaking combinational feedback loops (conformal automatically cuts the loop when we exit setup mode)

report_pin_constraints -all => This is to verify that only intended constraints are there. Very important to run this.

#set flatten model => this cmd allows you to specify certain conditions for flattening the circuit. Usually applies to revised, when conformal is flattening design, as revised netlist is the one which get these transformations added by synthesis tool. (-latch_fold is the only option that should be needed. all others are optional).
#set flatten model  -map => conformal automatically maps key points when it exits the Setup mode
set flatten model  -gated_clock => when clk gating causes problems during comparison, we use this cmd to remodel latch based clk gaters into mux based feedback ckt to match rtl. adding option "-gated_clock_latch_free" remodels latch free clk gating into mux based feedback ckt. However, for this modeling to be valid, enable signal must be stable while clk is active.
set flatten model -latch_fold => To convert two master/slave D-latches (DLATs) into a single D flip-flop (DFF) gate. NOTE: This is important to provide, since most libraries model flops as 2 MS D-latches while in RTL they are FF, so it will give tons of unmapped points when running lec with verilog model library, since DFF in RTL can't map to DLAT in libraries.
set flatten model -latch_transparent => to remodel DLAT (whose clk ports are always enabled) into buffers (transparent latch)
set flatten model -seq_merge => To merge common groups of sequential elements as one sequential element in the clock cone of a DFF or DLAT
set flatten model -all_seq_merge => same as above except that it's for the logic cone. -all_inv_seq_merge is for elements that are inverted.
set flatten model -seq_redundant => To remove seq redundancies, such as rst pin of flop anded with Q o/p pin of flop. This redundancy causes non-eq for designs, so important to set this. At TI, verilog models of lib cells have this redundancy (SDB20.v in 33hpa07) in some cells, so we use option "-lib_seq_redundant" to remove redundancy from lib cells. In order to pass lec, we will have to use this option "set flatten model -lib_seq_redundant -seq_transform", run compare (which will show non-eq points), then do "analyze noneq" and again run compare, which will make designs lec clean.
set flatten model -seq_Constant => To convert a DFF or DLAT to a ZERO/ONE gate if the data port is set to 0/1. adding option "-seq_constant_x_to 0" to -seq_Constant option will optimize flop to constant value 0 when flop is always in "X" state.
set flatten model -loop_as_dlat => To model combinational loop as a DLAT.

#mapping method: needs to be set before exiting setup mode. Default mapping is name-first, with no case sensitivity (use -sensitive to make it case sensitive).
set mapping method -phase => This method maps the key point with an inverted phase, i.e. compares set logic of golden to reset logic of revised and vice-versa for inverted-equiv. Phase mapping is recommended when the synthesized netlist has gone through sequential inversion or inverter push.
set mapping method -unreach => To map unreachable points.

#report environment -mapping => reports mapping method used.

set directive off synopsys => turn off synopsys directive

6. set lec mode, compare and report results:
set analyze option -auto
set system mode lec => This starts the mapping b/w golden and revised key points
=> At this point, all key points for golden/revised are reported, and which of these are mapped/not-mapped.
// Warning: Golden and Revised have different numbers of key points: Golden  key points = 5379 Revised key points = 5238
// Mapping key points ... Warning: Golden has 8 unmapped key points
================================================================================
Mapped points: SYSTEM class
--------------------------------------------------------------------------------
Mapped points     PI     PO     DFF    DLAT   Z      BBOX      Total => Z means floating node (TIE-Z)
--------------------------------------------------------------------------------
Golden            205    609    4302   104    7      4         5231 (total golden key points =5231+106+34+8=5379, matches above. see below)
--------------------------------------------------------------------------------
Revised           205    609    4302   104    7      4         5231 (total revised key points =5231+7=5238, matches above. see below)
================================================================================
Unmapped points:
================================================================================
Golden:
--------------------------------------------------------------------------------
Unmapped points   DFF    E         Total => E means Error node (TIE_E)
--------------------------------------------------------------------------------
Unreachable       106    34        140 => unreachable points are OK. Usually spare FF/latch in rtl are unreachable.
Not-mapped        8      0         8   => These Not-mapped points are not OK. These should be 0. Fix these before proceeding
================================================================================
Revised:
--------------------------------------------------------------------------------
Unmapped points   Z         Total
--------------------------------------------------------------------------------
Unreachable       7         7         => unreachable points are OK.
Not-mapped        0         0         => These Not-mapped points are OK. Usually there aren't any of these in revised. These may exist in eco designs, where o/p of some gates may not be used anymore in newer design, but there was no way to delete these.  
================================================================================

For the "Not-mapped" points above, tool will try to remodel and map them, since w/o that mapping, key point mapping is incomplete and LEC can't proceed. Most of the times, tool is able to prove these are unreachable and moves them to "U" category.
 
report unmapped points -summary > reports/summary.rpt => displays list of unmapped points, to see if there are any. Mapped points can also be reported by using "report mapped points"
report unmapped points -extra => OK to have these
report unmapped points -unreachable => OK to have these
report unmapped points -notmapped => NOT OK to have these in rtl.

add compared points -all => to specify which mapped points conformal compares. default is all. Only mapped points are compared.
compare => starts comparison. shows progression from 0 to 100%.
=> Out of all mapped points, compared points are reported for equiv/non-equiv.
// 5016 compared points added to compare list. => NOTE: not all mapped points (5231) got added to compare list. This is because some mapped points got merged/converted/remodeled to get better matching.
================================================================================
Compared points      PO     DFF    DLAT   BBOX      Total
--------------------------------------------------------------------------------
Equivalent           609    3989   104    2         4704
--------------------------------------------------------------------------------
Abort                0      310    0      2         312 => If there are abort points, tool automatically tries harder until all "compared points" are done comparing.
================================================================================

report compare data -class nonequivalent -class abort -class notcompared >> reports/summary.rpt => view a list of all compare points and their status (equiv or non-equiv). When -class added, then only compare points belonging to that class are shown
#get_compare_points -diff -count => If this is anything > 0, then designs aren't equiv. This cmd can be used in tcl mode only.

#optional
#report_unmapped_points -golden  -notype BBox  > unmapped.golden.rpt
#report_unmapped_points -revised -notype BBox  > unmapped.revised.rpt
#reporting floating nets is helpful to find out large current issues.
#report_floating_signals -golden  -all         > floating.golden.rpt
#report_floating_signals -revised -all         > floating.revised.rpt

#below 5 cmds are helpful when we have non-eq points and we want lec to analyze and try again. works only with "-xl" license of lec.
#analyze noneq -verbose
#analyze setup -verbose
#add compare points -all
#compare
#report compare data -class nonequivalent -class abort -class notcompared >> reports/summary.rpt


report verification -verbose >> reports/summary.rpt => reports a table of all violations as "non std modeling options used", "incomplete verifications", "design modifications", "extended checks" and "design ambiguity"
report statistics >> reports/summary.rpt => summarizes mapping and compare stats

exit => to exit tool

-------------
RC Compiler:
-----------
In RC compiler, we write dofile using this cmd:
write_do_lec -revised_design digtop.v -logfile rtl2final.lec.log >  rtl2final.lec.do => tool assumes that design loaded into RC is RTL design. -revised_design specifies gate level netlist. If -golden_design is not specified, RTL design loaded with read_hdl cmd is considered golden and hier comp is done. If -golden_design is specified with gate level netlist, then flat comp done since both are gate level netlists.

NOTE: Even if we don't write out dofile explicitly, RC automatically generates one (after synthesize cmd is run) and puts it in fv/<DIGTOP>/rtl_to_g1.do. This has OVF directives useful for mapping. OVF file is also put in same dir. RC also generates dofile w/o OVF which is in fv/<DIGTOP>/rtl_to_g1_withoutovf.do

Above cmd generates rtl2final.lec.do file. It has following cmds:
0. general setup:
tclmode => prompt shows TCL_*>
vpxmode => prompt doesn't show TCL_ anymore. start entering conformal cmds.

#abort cmd in dofile specifies how to respond to errors. <on | off | exit>
set dofile abort exit => exits the session if any errors in dofile

usage -auto
#log file
set log file logs/rtl2final.lec.log -replace

#ovf file that does transformations for module ports, clk-gaters, etc.
vpx read guide file fv/S1/rtl_to_g1.ovf

set naming rule "_" "" -array_delimiter -golden
set naming rule "%s_reg" -register -golden
set naming rule %L.%s %L[%d].%s %s -instance
set undefined cell black_box -noascend -both => All referenced modules should either be defined or blackboxed. On finding an undefined cell, conformal exits. To prevent it from erroring out, blackbox undefined cells. It's always inserted by write_do_lec cmd. However, avoid this as it can mask a user error.
set undriven signal Z -golden => undriven signal is set to "z" for golden design in auto generated dofile. In RC, undriven signal can be set to 0,1,X,none(default). In LEC, the "none" default setting is translated to "Z" default setting.


1. Read library: It reads library in liberty format (since liberty files are used during synthesis). It can also read simulation library in verilog format, but only V-1995 format is supported.
read library -statetable -liberty -both \ => -statetable and -liberty are always present in dofile gen by RC
        /db/pdk/lbc7/rev1/diglib/msl270/r3.0.0/synopsys/src/MSL270_W_125_2.5_CORE.lib \
        /db/pdk/lbc7/rev1/diglib/msl270/r3.0.0/synopsys/src/MSL270_W_125_2.5_CTS.lib


2. Read RTL design: Read rtl, elaborate and set top level to digtop. read_hdl in RC gets translated into read design
read design -rangeconstraint -configuration  -vhdl 93 -golden -lastmod -noelab \
                ../../Source/spi_typedefs.vhd \
                ../../Source/spi.vhd         
elaborate design -golden
set root module digtop -golden

3. Read Synthesized gate level design, and set top level to digtop :
read design -verilog -revised -lastmod -noelab netlist/digtop.v \
elaborate design -revised => sometimes, elaboration is not needed for gate level design
set root module digtop -revised

4. params/settings
set undefined cell -noascend black_box -both => this cmd is always added by RC as it tells the tool to make all undefined cells blackbox. This should NEVER be used in a user gen dofile, as it can mask real errors for undefined cells.
report design data
report black box

//analyze and apply OVF Transformations loaded above by reading ovf file
vpx apply guided transformations
vpx report guide information

uniquify -all -nolib
set flatten model -seq_constant -seq_constant_x_to 0 => this added if any of the const 0 or 1 flops, or const latches are optimized in RC.
set flatten model -nodff_to_dlat_zero -nodff_to_dlat_feedback
// set parallel option -threads 4 -license xl
set analyze option -auto

#write out dofile based on all cmds above and then run that dofile. We do hier compare since then it's easy to debug which modules failed.
write hier_compare dofile outputs/hier_rtl2final.lec.do \
        -noexact_pin_match -constraint -usage -replace -run_hier \
        -prepend_string "analyze datapath -module -verbose; usage; analyze datapath -verbose"

#dofile generated above has cmds for comparing each module of design. Only modules that have same input/output pins for both revised/golden are considered for hier comparison. also, modules which have <50 instances in them are skipped for hier comp, since they are easy to compare at top level. Once hier comp is done for a module and it's equiv, it's blackboxed, and then the blackbox model is used when doing hier comp for other modules. This dofile has cmds which look like this:
1. comparing one of the sub-modules S1_CLOCK_GATE:
set system mode setup => setup mode
set root module S1_CLOCK_GATE -Golden
set root module S1_CLOCK_GATE -Revised
set module property -instance /I_S1_TIMER/I_S1_CLOCK_GATE -Golden
set module property -instance /I_S1_TIMER/I_S1_CLOCK_GATE -Revised
add pin equivalences XRESET FE_OFN42_xreset_r -hier -Revised => add pin equiv in revised since one pin can have multiple copies.
add ignored inputs I_PIN_YGPIO[5] -Golden => ignore certain i/p pins for golden.
report black box -NOHidden

set system mode lec => lec mode
analyze datapath -module -verbose; usage; analyze datapath -verbose => this string was added during write of the dofile.
add compared points -all => adds all PO and state points to compare list
compare -noneq_stop 1 => compare stops on 1st non-equiv point
save hier_compare result
usage

2. blackbox the above module and compare next sub-module. If this sub-module instantiates other sub-modules, then they are blackboxed and compared.
set system mode setup
add black box S1_CONTROL_S2 -module -hier -Golden => blackbox above module
add black box S1_CONTROL_S2 -module -hier -Revised
set root module S1_CONTROL_HUNT -Golden => start with new sub-module
set root module S1_CONTROL_HUNT -Revised
...

3. compare root module after comparing all sub modules. Root module will blackbox submodules that are directly instantiated in it.
set system mode setup
add black box S1_TIMER_WRAPPER -module -hier -Golden => blackbox the last submodule. All other submodules have been blackboxed.
add black box S1_TIMER_WRAPPER -module -hier -Revised
set root module digtop -Golden => set module to digtop
set root module digtop -Revised
set module property -instance / -Golden
set module property -instance / -Revised
...
save hier_compare result
usage
set system mode setup
report hier_compare result -usage
report hier_compare result -Non_equivalent -usage
report hier_compare result -Abort -usage
report hier_compare result -Uncompared -usage

--------------------
The dofile can be run using any of the 2 cmds below
1. run hier_compare outputs/hier_rtl2final.lec.do
2. dofile outputs/hier_rtl2final.lec.do

#report hier compare results
set system mode lec
tclmode
puts "No of diff points    = [get_compare_points -diff -count]"
if {[get_compare_points -diff -count] > 0} {
    puts "ERROR: Different Key Points detected"
}

vpxmode
exit -force => exits Conformal
--------------------------------

Running LEC for LINT checks:
------------------------
We can run lec for doing LINT checks by running lec in gui mode, doing read design on rtl, and then going to Tools->HDL Rule on gui to see all rule violations

home/kagrawal/> lec -gui -log lec.log -dofile check_rtl.do
NOTE: We can also just run "lec" and then from gui goto "Do Dofile" and select "check_rtl.do" file. This will run the script. Just be sure to do a "reset" before running dofile to load new results.

check_rtl.do has following:
-----------
set directive off synopsys
read library -verilog2k /db/pdkoa/lbc8lv/current/diglib/msl458/PAL/CORE/verilog/*.v /db/pdkoa/lbc8lv/current/diglib/msl458/PAL/CTS/verilog/*.v
set rule handling RTL14 -Ignore -library => ignore certain rules for library cells, as we want to see violations for our RTL code only
set rule handling VLG9.2 -Ignore -library
set rule handling UDP3.2 -Ignore -library
set rule handling DIR6.1 -Ignore -library
set rule handling DIR6.2 -Ignore -library
set rule handling IGN2.1 -Ignore -library
set rule handling IGN3.2 -Ignore -library
set rule handling HRC3.16 -Ignore -library
set rule handling HRC3.10a -Ignore -library

set rule handling RTL1.1 -Ignore -Design -Golden => to ignore certain rules for design (design can be -Golden or -Revised or Both)

read design     ../../rtl/meson_clock_gate.v \        
        ../../rtl/i2c_top.v \
        -Verilog2k -sv -Golden -sensitive -root i2c_top

NOTE: on changing the RTL, we can rerun the design from within gui window by running this on cmd line:
SETUP > reset
SETUP > dofile check_rtl.do

---------------------------------------

Spyglass

SPYGLASS is a Synopsys tool which does RTL/GATE linting, CDC and RDC checks. Mostly used on RTL, since there are many other tools for checking GATE level netlists for correctness. The GUI version of spyglass is called "spyglass Explorer". The below section talks about the original SPYGLASS tool. As of 2022, Spyglass is integrated under "VC platform" and repackaged as "VC Spyglass" or "VC Static Spyglass" or just "VC Static". VC Spyglass has slightly different syntax, so we'll cover that in a separate section. The below section may not apply anymore, as the original Spyglass is deprecated. But in case you are still running it, you may follow the details below. Else move to the "VC Spyglass" section.

A SPYGLASS rule file is a PERL source file that has rule definitions and PERL subroutines, if any. Each rule has a number of attributes that decide how the rule will function. Some imp attrs are rule_name, msg, rule_primitives, etc. Rule primitives are C functions that are present either in SpyGlass core or in a shared library. They perform the real work of checking rules against your design and can be parameterized to produce different checks.


Running Spyglass (SG) standalone:

SPYGLASS is usually installed in a path like this: /project/tools/synopsys/spyglass/N2018.12-SP2-3-T-20191002/SPYGLASS_HOME/ => We'll refer to this as "$SPYGLASS_HOME"

spyglass is invoked by typing "spyglass" or "spyglass -gui". The shell version (no gui) brings up "sg_shell" where we can enter spyglass cmds. "gui_start" on sg_shell also brings up the gui. Tcl 8.7 is supported in sg_shell.

spyglass -project digtop.prj => This runs spyglass with the project file provided. The project file is just a bunch of cmds for reading all i/p files, setting options and running steps. Otherwise we can start w/o a .prj file, and enter everything manually using the cmdline or gui.

spyglass -tcl input.tcl -shell => other way of launching SG. We provide all SG cmds as well as tcl cmds in input.tcl. The digtop.prj file can be called from within input.tcl via cmd "new_project -force digtop.prj". -shell causes the SG shell to be invoked as opposed to the gui.
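
A minimal input.tcl, assembled only from cmds covered in this section, might look like below (file and goal names are placeholders for illustration):

#input.tcl => launch via: spyglass -tcl input.tcl -shell
new_project -force digtop.prj => load project file (design files, options, constraints)
current_methodology $SPYGLASS_HOME/GuideWare2.0/block/rtl_handoff
current_goal lint/lint_rtl -top digtop
run_goal => runs all rules for the selected goal
write_report moresimple > lint_rtl.rpt
exit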

Inputs/Outputs to SG: The SG shell is used to take input cmds and generate output reports. Console SF is used to navigate RTL, view waveforms, etc. In VC Spyglass, Verdi is used instead of Console SF, so it's a lot easier as the same Verdi interface is present when traversing the design.

  • Inputs: SG takes input as RTL/netlist and stdcell .lib files (if netlist provided, or if RTL contains stdcells instantiated)
    • Project file => .prj file which contains path to design RTL/netlist, stdcell .lib files.
    • Constraints => .sgdc or .sdc file. These are std SDC constraints for clks, IO ports, false_paths, etc. SGDC constraints are SDC constraints, but translated to Spyglass format.
    • Waiver files => .swl or .awl files. These are waivers that apply to Errors/warnings (ones that we want to waive)
  • Outputs: SG provides output reports.
    • Reports => .rpt files which list Lint/CDC/RDC violations.

 
When Spyglass is run on a design, these steps are run as follows:

  1. Analyze_design => syntax warning/error, basic synthesis warn/error displayed.
  2. Elaborate_design => elab warn shown.
  3. RTL_Rule_checking => all rtl rule checks done (NOT the rules associated with goals).
  4. Synthesize_Design => advanced synthesis done, and warn/error shown.
  5. Structural_read => structural read of design is done, and here rules of goals are run and results displayed. If no goals specified, then no rules checked.

LINT checks:

  1. Checks for basic connectivity issues, sim issues, synth issues, and also for recommended design practice.
  2. Also does functional analysis to identify issues with RTL


CDC/RDC checks:

  1. ensure all flops have clk,
  2. ensure clk and reset tree are free of glitches/races (sgdc file should have at the least clocks and resets defined; since sdc files do not have reset defns in them, we need to put the reset defn in sgdc)
  3. check all aspects of CDC = metastability, coherency problems on reconvergent crossings, sync resets, etc

SG Command file => The project file (digtop.prj) has all the cmds. This file is divided into 3 sections => design setup, goal setup and analyze results.

1. design setup: => all design files (verilog, vhdl etc), constraints files (sdc or sgdc), waiver files (awl), tech/hdl libs (liberty files) are read here (this is the "Design Setup" icon in the gui)

#read i/p files. These can be added using gui "design_setup->add_files" menu. Then use "design_setup->Read Design" menu to read all these files added.
read_file -type sourcelist /db/...Source/digtop_rtl.f => read RTL files using sourcelist
read_file -type verilog /db/design/../sram.v => read verilog file for memory element used in RTL. Usually not needed, as a model of the IP isn't required for spyglass to do its checks; it can be treated as a blackbox.
read_file -type gateslib /db/lib/.../STDCELL.lib => read gate liberty files (usually needed for netlists). We can provide .lib or .v files for gates, as both of them have the functionality defined. For Hard IP, we need these lib files too, unless we want them to be treated as blackbox.

#read_file -type sglib /db/lib/.../STDCELL.sglib => this is the proprietary SG lib format for gates. It's binary, so can only be read by the SG tool. The .lib files above can be converted to sglib for faster run times on future runs.
read_file -type sgdc /home/.../spyglass/common_project_constraints.sgdc => read sgdc (spyglass design constraints), similar to sdc with some variations, described later

read_file -type sgdc /home/.../spyglass/digtop.sgdc => constraints file specific to design. For Hard IP, we can either provide .lib file or .sgdc file to include them in CDC/RDC analysis. See later on how to generate abstract sgdc files for IP from within spyglass.
read_file -type awl /home/.../spyglass/common_waivers.awl => awl file is the one generated by tool that should be used for waivers
read_file -type waiver /home/.../spyglass/waivers.swl => read waivers from swl if present

#set options for sdc (optional) => this section is needed if you have sdc file that need to be converted to sgdc (read sdc file via cmds shown later)
set_option sdc2sgdc yes; => To enable translation of SDC to SGDC (SGDC=spyglass design constraints file, which has a different syntax than SDC file).
set_option sdc_generate_cfp yes; => To enable generation of cdc_false_path commands.
set_option support_sdc_style_escaped_name yes; => To allow non-escaped names used in SDC.
set_option sdc2sgdcfile ./output/digtop.sdc2sgdc.out; => To specify name of the translated SGDC file.
set_parameter sdc_domain_mode  sta_compliant; => This is default value. Mentioning here to capture the recommended values for the parameter.
set_parameter sdc_generated_clocks yes; => To have the generated clock definitions translated to clock constraint.
set_parameter enable_generated_clocks yes; => To have the generated clock definitions translated in uncommented format.

constraints:

constraints file = usually the constraints file is provided in sgdc format. Depending on the goal, we may need fewer/more constraints. Usually for lint, no constraints are needed. For CDC/RDC, we need clocks and resets defined. We may also define clocks for the other I/O ports (i.e the clk driving the i/p port, or the clk capturing the o/p port), so that the tool knows if a synchronizer is needed for these I/O ports.

Instead of writing constraints in sgdc format, which may be painful, we can reuse the sdc constraints file from synthesis or sta runs. If we have an sdc file, we can put the cmd below in the sgdc file, and it will generate a new sgdc file to be used, based on the constraints from the sdc file.

sample constraints file (sgdc wrapper that reads the sdc natively):
current_design digtop
sdc_data -file digtop.sdc => read the sdc constraints file natively; an sgdc file is generated for use by spyglass. The sdc file doesn't have any reset related info, so we need to provide that and any additional info by manually adding it to the generated sgdc file. This sgdc file is generated in the file specified via the "set_option sdc2sgdcfile" option above. This generated sgdc file is the one that is used for SG runs.

sdc->sgdc translations (these translations are done internally by SG and translated cmds are put in sgdc file)

  1. create_clock -> clock (not much documentation for "clock" in the spyglass online help manual; it says it's not supported in the tcl shell, so documentation is very limited). For every create_clock cmd, a clock cmd with a diff domain "d0", "d1", etc is created. However, all such clocks are still treated as sync, just as in the sdc cmds. If duty cycle is not specified, it's assumed to be 50%.
  2. set_input_delay -> input
  3. set_output_delay -> output
  4. set_false_path -> false_path/cdc_false_path.
    1. ex: cdc_false_path -from clk1 -from_type clock -to clk2 -to_type clock,
    2. ex: false_path -from clk1 -to clk2 -type sfp => -type specs where this false path got translated from. here -type sfp implies it got translated from set_false_path sdc cmd.
  5. set_clock_groups -> false_path/cdc_false_path => This cmd specs async behaviour among diff clocks. Very imp to specify this. If no false paths b/w clks or clock groups are specified, then all clks defined via "clock" are considered synchronous. In such a case, CDC runs have no meaning, since there will be no CDC violations (as all clks are considered synchronous). Here false paths are specified b/w clk groups defined as async or logically/physically exclusive. See the worked sketch below.
    1. ex: set_clock_groups -asynchronous -group {clk1 clk2} -group {clk3} => false_path -from clk1 clk2 -to clk3 -type scg_asynchronous => here -type scg_asynchronous specs that this false_path came from the sdc cmd "set_clock_groups -asynchronous"
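
Putting translations 1 and 5 together, a hypothetical before/after sketch (clk/port names made up; the exact formatting of the generated file may differ by tool version):

#digtop.sdc (input):
create_clock -name CLK1 -period 40 [get_ports clkosc]
create_clock -name CLK2 -period 20 [get_ports clkspi]
set_clock_groups -asynchronous -group {CLK1} -group {CLK2}

#digtop.sdc2sgdc.out (generated sgdc):
clock -name "digtop.clkosc" -tag CLK1 -domain d0 -edge {0 20} -period 40 => 50% duty assumed, domain d0 auto assigned
clock -name "digtop.clkspi" -tag CLK2 -domain d1 -edge {0 10} -period 20
false_path -from CLK1 -to CLK2 -type scg_asynchronous => from set_clock_groups -asynchronous
false_path -from CLK2 -to CLK1 -type scg_asynchronous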

generated sgdc file (has sgdc cmds in it). This can be auto generated from the sdc file above, or can be manually modified.

top.sgdc file:

#top module
current_design digtop => top level module specified

#clk
clock -name "digtop.clkosc" -domain domain3  -edge {0 20} -period 40 => specify all clk pins using this cmd, so that sypglass can analyze clock paths. Here, we specified port name for clk, but if we want to give optional tag name for this clock, it can be done via option "-tag CLK1" (so this clk, clkosc, will now have a tag "CLK1" that can be used instead of "digtop.clkosc" => similar to sdc option "-name CLK1"). NOTE: names can be ports or hier net/pin names (hier names are rep by using . as a separator). Both tag and name specified in sgdc cmd, when name specified in create_clock.

clock -tag "top.mod1.VCLK_1" -domain "top.mod1.clk_domain1" => virtual clk specified (since no -name is used, only -tag). A clk can be a virtual clock too (instead of a port). A virtual clk can be defined as having a clk waveform. However, since there is no port associated with a virtual clk, we have to use "-tag <clk_name>" to give the virtual clk a unique name, with which it will be identified. This clk is identified as a virtual clk by the absence of a port name. It is by default considered asynchronous to all other clocks. Even if we don't define a virtual clk, any undefined clk is considered a virtual clk.

#reset
reset -name "digtop.n_puc" -value 0 => reset pin with active value=0 (active low). Specify all reset pins, so that spyglass can analyze reset paths. A reset is assumed to be an async reset, unless option "-sync" is used, which makes it a sync reset.

#for all other i/o ports, we use the "input/output" or "abstract_port" cmds. SG recommends that the "abstract_port" cmd be used instead of the input/output cmds. We specify constraints for i/p ports only (i.e the driver clock); o/p port constraints are generated automatically by the tool (only for the abstract model, explained later), and do not need to be provided. If the driver clock defn is not found (i.e it hasn't been defined via the "clock" cmd), then it is assumed to be a virtual clk and hence async to all other clocks.

#input -name "top_1.DMUX" -clock vclk => This specifies that i/p port DMUX in top_1 module is driven by clock named "vclk". Note, vclk is name of clk and not tag of clk (in cmd "clock -name vclk"). clock can be virtual clk too, in which case we specify the tag (clock -tag vclk).
#output -name "out_1" -clock clk_port0 => this specifies an o/p port, with destination clock "clk_port0" (i.e the o/p port finally goes into a seq element being driven by clk_port0). The clock can be a virtual clock too. Usually, "output" cmds are not needed. The reason is that no matter what the destination clk for an o/p port is (sync or async), we always put synchronizers on the i/p side of any block. So providing this info serves no purpose: even if we know the destination clk is async, we don't put any synchronizers on the o/p ports of our block.

abstract_port -ports IN1 -scope cdc -combo no -clock clk_1 => i/p port being driven by clk_1.

#abstract_port -ports OUT1 -scope cdc -combo no -clock clk_2 => o/p port being driven by clk_2. NOTE: this is diff than the "output" cmd, where the clk specified the destination clk, while "abstract_port" specifies the source or origin clk. There is no way to specify a destination clk using this cmd (may be ok?? FIXME). However, we never provide o/p port constraints, so no need to bother about these.

###other constraints

#synchronizers

sync_cell -name synchronizer_2ff => synchronizer cells are used in any design with clk crossings. Either these are provided as lib cells, or are just put directly in the design by having multiple flops back to back. This cmd specifies valid synchronizer cells for control crossings. We can specify multiple such cells with optional from/to freq or clk, and the tool will make sure that all crossings have one of these lib cells across them. If any cells besides the ones listed here or "manually inserted back to back flops" are used in the design, or the src/dest freq/clk condition is not satisfied, then the tool will flag it. NOTE: these sync cells are valid only for ctl crossings, since data crossings do not have sync cells.

sync_cell -name SDF_SYNC_CELL -reset => specifies that clock domain crossings on the reset path be considered synchronized (i.e the reset signal coming in should be sync to the dest clk. If it's not, then it should be flagged as a CDC ERROR, since the inbuilt synchronizer doesn't have any logic to sync the reset signal, and its behaviour assumes that the reset coming in is sync to the dest clk). We specify more such sync_cells for various sizes.


reset_synchronizer -name dig_top.q1[5]  -clock LFO_CLK  -reset dig_top.q1[5] -value 0 => used to specify a reset synchronizer signal along with its asserted reset value. -name specs the name of the sync o/p, -reset is the name of the src reset for which "-name <name>" is the synchronizer. -clock is the clk of the synchronizer. -value 0 specs that 0 is the active assertion value of the reset sync (this value is used by SG for de-assertion verification purposes).

num_flops -default 2 => specs that a min of 2 flops should be used for multi flop sync on all clk crossings for which num_flops is not specified.

#false path
cdc_false_path -from "I2C_SDA_IN" -to "I2C_SCL_IN"
cdc_false_path -from "I2C_SCL_IN" -to "I2C_SDA_IN"
set_case_analysis -name "digtop.SCANMODE" -value 0 => to run in func mode only. Nets/pins can be tied to a certain value depending on mode, so that analysis is not done with those toggling. Since scanmode changes clks, CDC analysis can become very noisy, just like in STA runs.

quasi_static "top.netA[*]" => this is used for signals which are quasi static, i.e they change once in the beginning but then assume a static value of 0 or 1. CDC skips verification of such paths, which is what we would want, since such signals don't need synchronizers, etc (i.e signals b/w scan and functional, since we control how many cycles to wait to allow values to propagate correctly, so no need to check for synchronizers b/w these signals). Wildcards * and ? are allowed here; however, when using wildcards, double quotes are needed.

cdc_attribute -unrelated "digtop.en_sync[0]"  "digtop.en_sync[1]" "digtop.en_sync[2]" => states that these are unrelated signals

reset_filter_path ..., cdc_filter_coherency => These are used to ignore certain objects in analysis

#below are needed if we do dft related checks. Not needed if we do not want to do dft checks

clock -name "digtop.dft_scan_clk" -domain CLK03  -value rtz -testclock -atspeed => define test/dft clk.

testmode -name dft_mod2.tds_en -value 0 -scanshift => This forces value to 0 in testmode for tds_en signal. -scanshift implies it's only during scan shift and not during entire test mode
noscan -name "n_clkgater_dft.*func_pulse*"

sample waiver file:

waiver file = *.awl => It's an SGDC format file that contains waive constraints
--
waive -rule W415a -msg {signal assigned multiple times :[Hier: ace_dig:i2c*_inst..]} -exact -comment "i2c waived" => these waivers are generated by selecting the msg to waive and right clicking "Waive msg to->waiver file". -msg will waive only messages which exactly match the msg content in {}, so only that particular logic with that rule will be waived. If -msg wasn't there, then all logic matching rule "W415a" would be waived, which would be incorrect. -exact will match */?/etc in -msg {} exactly and not treat them as perl wildcards (so i2c* matches literal "i2c*" in the msg; otherwise it would match i2c0, i2c1 etc).

waive -du "WORK.i2c_fsm" -rule "Ar_asyncdeassert01" -msg {Reset signal 'bellatrix_digtop.sync_SOFT_RESET.sync2_q' for 'set' pin of flop 'bellatrix_digtop.addrValid' is async} => design unit specified here

----

#Now after finishing constraints and waiver file, we can set some other optional options
#set options general (optional)
set_option top digtop => mandatory if -top is not provided during "goal setup". spyglass doesn't figure out the top level by itself, and will error out.
set_option projectwdir /home/.../results => set project working dir
set_option active_methodology $SPYGLASS_HOME/GuideWare2.0/block/rtl_handoff => we defined active methodology here. Optional, as we use cmd "current_methodology" later.
set_parameter synchronize_cells SYNC2SDFFCQ_F4_DH_85LL => specifies that any synchronizer must use this cell as synchronizer.

methodology: We define the active/current methodology for the run in SG using "set_option active_methodology" or "current_methodology". This decides what set of goals are going to be run. 3 methodologies for block level are defined by default: initial_rtl, rtl_handoff, netlist_handoff. rtl_handoff is the most common methodology that we use on RTL.

We'll see a corresponding dir for each methodology in $SPYGLASS_HOME/GuideWare2.0/block/rtl_handoff. Within each dir, we'll see a subdir for each goal category such as cdc, lint, rdc, dft, adv_lint, power, etc. Within each goal category dir, we have further files for each final goal, such as cdc_setup.spq, lint_rtl.spq, etc. These goals are then referred to as "cdc/cdc_setup", including the dir path. The dir structure can be anything; just the full path for each goal needs to be provided for the goal to be recognized. These final goal files are in the internal spq format. A goal is basically a collection of rules. They have syntax as:

=template++++++ //template section that prints whatever you want to show to user as documentation for that goal. It displays on RHS of gui window

CHECK for RTL: This checks for 1. ... etc //This displays on gui window as description

=cut++++++++

-policy=clock-reset //setup cmd to register policy

-enable_mux_sync=all //setup cmd to enable specific parameter

-rules Clock_check10 //This rule is added to this goal, similarly 100's of rules added for each goal

-overloadrules Ac_clockperiod01+severity=Error => changes severity of rule "Ac_clockperiod01" to Error, other default severity for that rule applies

Instead of the 3 default methodologies, we can define our own custom methodology. There we can have our own custom goals, each of which can have whatever rules we want checked. ex:

current_methodology /home/custom_meth/lint_cdc => methodology is now the custom "lint_cdc". Within this dir, we can have similar files as above for custom goals, ex: custom_rtl_lint.spq. Then in the gui, under goals, it will show as "lint_cdc/custom_rtl_lint" and so on for other goals.
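
For example, a minimal /home/custom_meth/lint_cdc/custom_rtl_lint.spq, assembled purely from the spq directives shown above (the policy/rule names are just the ones used earlier, picked for illustration):

=template++++++
CUSTOM RTL LINT+CDC: checks basic clock-reset rules for RTL handoff
=cut++++++++
-policy=clock-reset
-enable_mux_sync=all
-rules Clock_check10
-overloadrules Ac_clockperiod01+severity=Error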


2. goal setup: adds setup info for goals, sgdc files, reports. These can be added using gui "goal setup" menu, and selecting required goals.

Main goal categories for rtl_handoff methodology are => lint, adv_lint, constraints, cdc, rdc, power, physical, rtl2netlist and connectivity_verify. Within each goal category are various goals (i.e cdc/cdc_verify).

Within each goal (i.e cdc/cdc_setup_check) are various rules (Ac_report01, etc), which can be enabled/disabled as needed (on gui, right clicking on rules brings up edit window).
#goals to run => we specify a separate line for each current_goal; they can also be combined into one using other options of the "current_goal" cmd.
> current_methodology $SPYGLASS_HOME/GuideWare2.0/block/rtl_handoff => the scope of each goal is confined within the scope of the current methodology. The methodology decides which rules will be run for a particular goal, since some rules may only be appr for RTL while others for gate.
> current_goal Design_Read -top digtop => goal = read design and show basic errors. Each goal has a set of rules that's checked against.
> current_goal lint/lint_rtl => lint goal category, etc
> current_goal cdc/cdc_setup_check -top digtop => cdc goal category. specify goal scope. -top is optional, as top is picked from options
> read_file -type awl cdc_waiver.waiver => if no "read_file" is specified for each goal separately, then all files specified above are read for all goals. We specify separate files when we need diff waiver files for diff checks.
set_goal_option default_waiver_file /home/.../spyglass/common_waivers.awl => picks up default waiver file specified above, on top of cdc waiver file
> current_goal cdc/cdc_verify_struct => another cdc goal

Custom goals: We can define our own custom goals, on top of std goals provided.

define_goal CUSTOM_GOAL_1 -policy { lint } { => this defines custom goal "CUSTOM_GOAL_1" which starts appearing next to all std goals
set_parameter abc def
}


3. analyze results: goals are run, and results shown. In the gui, click on "Analyze_results" and then "Run goal". It will run all selected goals (i.e cdc/cdc_setup, etc). Results are displayed based on which goal is chosen on top for display. We have to click on each goal, one by one, to see messages for all goals. The bottom half of the gui is where it shows the shell, violations and waiver tree tabs. Click on the "violations" tab, and look for "Group By" in the top part of this bottom gui. Select "Goal by severity" to see them arranged by severity (Fatal, error, warning, info). We can group them any way we want by choosing the appr option.
> run_goal => runs all the goals chosen above. You will see a lot of rules being checked. Creates default dir for results (eg Group_Run/lint_rtl/spyglass_reports/*.rpt, *.log)
> write_report summary > summary.rpt => the write_report cmd is optional, as reports are written by default. This is needed if we want our own non-default file names, paths, etc.
> write_report moresimple > simple_summary.rpt
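
Tying the 3 sections together, a skeletal digtop.prj using only the cmds shown above would look something like this (paths are placeholders; pick goals per your methodology):

#1. design setup
read_file -type sourcelist /db/.../digtop_rtl.f
read_file -type gateslib /db/lib/.../STDCELL.lib
read_file -type sgdc /home/.../digtop.sgdc
read_file -type awl /home/.../common_waivers.awl
set_option top digtop
set_option projectwdir /home/.../results

#2. goal setup
current_methodology $SPYGLASS_HOME/GuideWare2.0/block/rtl_handoff
current_goal lint/lint_rtl
current_goal cdc/cdc_setup_check

#3. analyze results
run_goal => runs all goals chosen above
write_report moresimple > simple_summary.rpt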

Incremental Mode analysis: A useful mode if we want to see incremental changes in the design compared to a previous run (i.e what new errors/warnings show up compared to the older design). This can be selected by choosing "incremental mode" at the top of the gui.

Scenarios: A scenario is a goal that contains modified settings of a goal. You can create multiple scenarios for a goal where each scenario represents different settings for that goal. For example, you create the scenario, Scenario1, for the connectivity goal in which you change values of some parameters. Similarly, you can create another scenario, Scenario2, for the same connectivity goal in which you can specify certain files, such as VCD or SGDC files. You can run these scenarios like any other goal. The advantage of using scenarios is that you can save different settings (in the form of scenarios) made for a particular goal.


Debug procedure on gui:
A. clicking on Design icon on top, shows all details of design => modules, blackbox, flops, etc
B. clicking on the nand gate (or yellow triangle warning icon) in messages will bring up the rtl code with the corresponding violation in the RTL screen above. We can open the rtl in any editor by "right click" and then "open editor"
C. The waiver file should be enabled for each goal (by clicking waiver, selecting the waiver file, right click->enable_file), else it may not get picked up.
D. reports are by default in digtop/ace_dig_top/lint/lint_turbo_rtl/spyglass_reports/spyglass_violations.rpt, similarly for others.
E. If you see an error or warning msg, and want to look at the schematic where that error is happening, then click on the particular error/warning in the bottom window. Now click on the + sign to expand it (since errors of a particular nature are grouped together). Once you are down to the bottom-most level where you see only that error, right click and select "incremental schematic". This will bring up a schematic showing exactly where the issue is. It hides all unrelated logic, so it's very useful to debug this way.


In order for the design to be clean, messages should have no errors/warnings for each goal. We have to go thru each goal and look at the messages. So there has to be a separate waiver file for each goal, as the cdc waiver file will be very different than the lint waiver file (as cdc messages are a lot diff than lint messages).

abstraction:

Until now, we ran CDC/RDC flat on the whole design, i.e we specified all rtl files down to the stdcells. For large designs, we may not want to run CDC/RDC at the top SOC level for all of the RTL. It's more convenient to run CDC/RDC at the lower block level, get it clean, and then generate an abstract model of these blocks in an sgdc file, and then use that abstract model at higher levels, and finally at the top chip level, so that only the logic connecting these blocks needs to be verified for CDC/RDC (the internal guts of the blocks have already been verified). The same approach is also used for a blackbox or IP within a block, so that we only provide an abstract model for these IP (instead of providing full blown RTL models), and use those to run CDC/RDC analysis. This abstract view is a set of SpyGlass design constraints describing the behavior of the block ports. This is helpful, since now the full CDC/RDC behaviour of these blackboxes is captured via I/O port properties only, and analysis can be done faster.

We don't have to specify any separate cmd for generating abstract sgdc files for an IP. We run SG normally on this IP or lower level block, providing sgdc constraints, waivers, etc. The presence of one rule in any of the checks generates the abstract constraints. This rule is "Ac_abstract01". This rule is "ON" by default for lint,cdc goals, or can be enabled in the gui. This generates the sgdc constraints file for block abstraction ($projectdir/<block-name>/cdc_abstract/cdc_abstract/spyglass_reports/abstract_view/cdc/<block_name>.cdc_abstract.sgdc), when SG is finished running CDC/RDC for this block. This abstract sgdc file contains all the info that is needed by SG to perform CDC/RDC analysis. We have to help SG in generating this file, by providing appr constraints on i/p ports, which we expect in order for CDC/RDC analysis on these i/p ports to happen correctly. Now, when importing this sgdc file at higher levels, additional rules "Ac_abstract_validation01, 02" validate the generated sgdc file constraints for correctness at higher levels (i.e they check that whatever is specified at the abstract level is valid at the higher level of hier). However, validation is performed on i/p ports only (and not on o/p ports), when the block is instantiated at SoC level. Constraints checked are those specified using clock, reset, abstract_port, quasi_static, set_case_analysis, num_flops, etc, or whatever is specified in the sgdc file for this block.

The generated abstract file has these sgdc cmds (NOTE: abstract_port cmds for both i/p and o/p ports are a lot more complex here, as they try to capture all the internal functionality of that block):

abstract_port: sgdc cmd "abstract_port" is used on all I/O ports to capture the behaviour of the ports. Other sgdc cmds such as clock, current_design, set_case_analysis, etc are also used to generate the full IP level sgdc file:

#abstract_port => This cmd is used for all I/O ports. SG also validates constraints put using this cmd. If it finds an inconsistency b/w what it sees in this cmd vs what it sees in the design, it will report an error. options:

-ports <port_name> => name of i/o port. Multiple port names may be specified in single cmd, by separating port names via space

-clock <clk_name> => specifies clock for that port. For both i/p and o/p port, it specifies driving clk (i.e clk of origin flop which drives the i/p or o/p port).

-reset <rst_name> => specifies the reset name assigned to the port (if the port is used as a reset pin for the block).

-combo <yes|no|unknown> => specifies if there is combo logic present on the i/p or o/p port. default is unknown, which means that reset validation checks should not be performed.

-sync <active|inactive> -from <src_clk> -to <dest_clk> -sync_names <net_pin_name_of_sync> => this specs synchronizer properties on ports. active/inactive specs if the port is driven by a ctl sync or a data sync (active => port is driven by a ctl sync that can act as a sync en for other data crossings. inactive => port is driven by a data sync that cannot act as a sync en for other data crossings). ctl sync means 1 signal passing thru a simple synchronizer made up of 2 or more sync flops, while data sync means there is no sync flop b/w the 2 clk domains, but instead a mux whose select signal is synchronized, and this ctl sync signal synchronizes the data signals. -from/-to are used only for o/p ports, to specify the clks reaching the src/dest of the synchronizer. -sync_names is used only for o/p ports, to specify the net/pin name of the synchronizer i/p pin (for a ctl sync, it's the net which is crossing from 1 domain to the other domain, while for a data sync, it's the select signal of the mux before being synchronized).

-related_ports <related_ports> => This is used for ports which do not have synchronizer. Such ports have seq paths to other i/p or o/p ports (just a flop in b/w i/p and o/p ports). Usually valid for o/p ports where related ports are i/p ports

-path_logic <combo|buf|inv> => specs logic from i/p port to inst_pin, or from inst_pin to o/p port, or from i/p port to o/p port

-scope => specifies for which SG product we want to apply this stmt (one of dft, cdc, constraint or base, which are the 4 products offered by SG).

NOTE: many more options available for this cmd, to enable SG to be able to perform analysis w/o knowing guts of design.

ex: abstract_port -ports {port_in[3]} -scope cdc -combo no -clock VIRTUAL_CLK_1 => This specs that i/p port port_in[3] of this IP is driven by a virtual clk named "VIRTUAL_CLK_1". Since virtual clks are usually in their own clk domain, they are async to all other clks. "-combo no" says that there should be no combo logic on this i/p pin path. If combo logic is found at a higher level, then CDC will denote it as an error. NOTE: i/p port constraints are simple, just specifying the driving clk.

ex: abstract_port -ports out[0] -scope cdc -clock "clk1" -from "VIRTUAL_CLK_1" -to "clk1" -sync active -sync_names "block1.int[0]" => this specs that o/p port out[0] is driven by clk "clk1", and has a synchronizer before this flop which is synchronizing from "VIRTUAL_CLK_1" to clk1. The name "VIRTUAL_CLK_1" (defined using "clock -tag VIRTUAL_CLK_1" somewhere else) implies that this clk is virtual and hence async to all other clks. The synchronizer i/p pin is block1.int[0], and it's a ctl sync. NOTE: o/p port constraints are complex, specifying sync etc, but are generated by the tool, so not an issue for us.

ex: abstract_port -ports out[2:0] -scope cdc -clock "clk1" -combo yes -related_ports in1[3:0] in2_wrt in3[4] => This specs that these o/p ports are driven by "clk1", have combo logic after being driven out of a flop, have no sync before the flop, but instead have regular flops and associated logic b/w them, which finally lead to these i/p ports (in1[3:0] etc). This applies to all o/p ports out[2], out[1] and out[0]. This kind of spec for o/p ports is very common in regular designs (as they usually have a series of flops from i/p ports to o/p ports, all in the same clk domain).

Once an abstract sgdc file is generated for the block automatically by SG, we can import that abstract file using this cmd when running SG at higher level:

sgdc -import mod_2flop_synchronizer /.../spyglass_reports/abstract_view/mod_2flop_synchronizer_cdc_abstract.sgdc => here we import the auto generated sgdc file for the synchronizer IP in a block level run. We generate a similar sgdc file for the block level, which we then import at SoC level. When writing the sgdc constraints file for the block level, we write constraints for i/p ports using the "abstract_port" cmd, and do not write constraints for o/p ports. We do this to tell the tool from which clk domain we expect the i/p ports to be driven. The tool generates the block level abstract file, and includes our hand written i/p port constraints, but uses the logic inside the block to derive the o/p port constraints. That finally generates a complete sgdc file with constraints for both i/p and o/p ports. i/p port constraints are usually very simple, as we just need to say from which clk domain we expect the i/p ports to be driven. The tool does the rest of the work at chip level to verify that assumption for i/p ports. It does not validate anything at o/p ports (probably because o/p ports eventually enter as i/p ports of other blocks).

Ex of IP abstraction sgdc file: These abstract block sgdc files are generated automatically by SG when we enable rules such as "Ac_abstract01" (we still need to run SG on this block and provide an sgdc file with basic clock, reset, input, etc defns in it).

ex: async_load_8_1_0_1.sgdc => (for a synchronizer with ctl and data signals). (i/p ports = clk_1, reset_n, asyncData[7:0], asyncCtl. o/p ports = DataOut[7:0], CtlOut)

abstract_file -version 5.1.0 -scope cdc
current_design "async_load" -param { DATA_WIDTH=8 RESET_VAL=1 } => specifies parameters of rtl file for which this sgdc file is valid for this synchronizer. These parameter values are used in the name of sgdc file above (i.e _8_1_0_1*) to uniquely specify sgdc files for different parameters, since they may have diff constraints applicable to them.

clock -tag "VIRTUAL_CLK_1" -domain "domain_1"

clock -name clk_1 -domain d0 => NOTE: clk_1 is a port name, and not a tag name
abstract_port -ports reset_n -scope cdc -clock clk_1 => i/p port reset_n driven by clk_1. Same constraints for other set/reset pins, as all are assumed to be already synchronized before getting to this block
abstract_port -ports asyncData -scope cdc -clock VIRTUAL_CLK_1 => i/p ports asyncData[7:0] driven by async clk "VIRTUAL_CLK_1". Note: explicit [7:0] not needed.

abstract_port -ports asyncCtl -combo no -scope cdc -clock VIRTUAL_CLK_1 => i/p port asyncCtl driven by the same async clk "VIRTUAL_CLK_1". Here "-combo no" specs that there can be no combo logic on this path.
abstract_port -ports DataOut[7:0] -scope cdc -clock "clk_1" -from "VIRTUAL_CLK_1" -to "clk_1" -sync inactive -sync_names "async_load.asyncCtl" => data o/p port spec as inactive since no synchronizer on data ports. sync_names assigned to i/p Ctl port.
abstract_port -ports Ctlout           -scope cdc -clock "clk_1" -from "VIRTUAL_CLK_1" -to "clk_1" -sync active    -sync_names "async_load.asyncCtl" => Ctl o/p port same as Data o/p port except that it's spec as active, since there is active synchronizer on Ctl port. sync_names is same as above
abstract_block_violation -name SGDCWRN_1 -sev WARNING -count 26 -is_builtin => This is generated by SG for its internal use. It specs that during abstract block generation for this block, 26 violations of name "SGDCWRN_1" (an inbuilt warning) were generated, whose severity is "Warning".


CDC/RDC violations:

Important violations that should be fixed in design: FIXME = add violations

1.