*******************************************
For Running Place n Route in Synopsys ICC (IC compiler):
---------------------------------------------------------------------

Synopsys ICC uses a Milkyway (MW) ref lib and MW tech/RC model files (TLU+) for physical data to do PnR.
Note: Cadence Encounter uses .lef files for physical data.

For logical data (during synthesis, timing, etc), Synopsys and Cadence both use .lib or .db timing files.
For running DC in topo mode, we need the MW lib so that DC can estimate cell placement, and we need TLU+ files so that DC can calc wire delays from the tech data instead of from wire load models.

MW ref lib structure: It's a unix dir containing binary files. It has 3 views for any ref lib (e.g. stdcells)
-----
1. CEL view: It's in subdir CEL, and contains the actual layout data for the cell. It's not used by the router. It's used for signoff extraction and signoff DRC/LVS checks. We don't really need this view, as extraction files (e.g. *.spef) have only routing extraction info (R,C of nets) and no extraction from the physical lib cell. Timing data in the .lib file for all these std cells is used for delay calc to do timing analysis. It might be useful for DRC/LVS checks, assuming the .lef file had some incorrect blkg, etc.

2. FRAM view: It's the frame view (similar to a lef file). It has pin, blkg, via, dimension, symmetry, etc., which are used by the PnR tool.

3. LM view (optional): It's the logical model view and has all the timing info. These are the same as the .lib/.db files used for timing during synthesis. They are specified using "target_library" and "link_library", just as during synthesis.

:1, :2 etc denote the version number for that particular cell.
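As a rough sketch (cell names and the internal bookkeeping files are just for illustration, not exact), the MW ref lib on disk looks something like this, with each view as a subdir holding versioned binary cell files:
pml48MwRefLibs/CORE/
    CEL/  => IV110:1  IV110:2  ...  (layout view of each cell, versioned)
    FRAM/ => IV110:1  ...           (abstract/frame view used by the PnR tool)
    LM/   => optional logical model (timing) view
    plus some MW internal files (lib, lib_1, ...)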

To create a MW ref lib, we use the Milkyway tool:
/apps/synopsys/milkyway/2010.03/bin/AMD.64/Milkyway => brings up a GUI
In the gui, go to cell_library->lef_in (appears on 2nd row). This opens a Read LEF box. Specify the "MW lib name" where we want to store all stdcells (i.e. pml48MwRefLibs), the tech lef file (/db/.../*tech_6layer.lef), and the stdcell lef file (/db/.../*core_2pin.lef). Click OK. This converts all cells in the LEF file to their equiv FRAM view and adds them as subdirs in the MW lib dir specified above.

---
Instead of working from the gui, we can use the cmd line i/f as follows (after opening the mw gui):
;# step 1 create a milkyway library from the tech file
cmCreateLib
setFormField "Create Library" "Library Name" "pml48_ref_libs/CORE" => since the dir to be created by MW is CORE, we need to have dir pml48_ref_libs already existing, or else mw will fail.
setFormField "Create Library" "Technology File Name" "../gs40.6lm.tf"
setFormField "Create Library" "Set Case Sensitive" "1"
formOK "Create Library"

;# step 2 read the lef into CEL view and model it into FRAM view
read_lef
setFormField "Read LEF" "Library Name" "pml48_ref_libs/CORE"
setFormField "Read LEF" "Cell LEF Files" "/db/pdk/1533e035/rev1/diglib/pml48/r2.4.0/vdio/lef/pml48_1533c035_core_2pin.lef"
setFormField "Read LEF" "Cell Options" "Make New Cell Version"
formOK "Read LEF"

Ex: /db/DAYSTAR/design1p0/HDL/Milkyway => In this dir, we create mw ref lib (both for regular and Chameleon cells). We also put .tf file and mapping file in here to generate tlu+ files. Steps for doing this are shown below in tlu+ section.

--------
create/open design MW lib
--------------
Once we are done creating mw ref lib, we create mw design lib using DC. We run DC in topo mode, and create our design MW lib. We need to create the design lib only once, then we need to only open it for any subsequent run.

create_mw_lib -technology <tech_file> -mw_reference_library <ref_lib> my_mw_design_lib => creates design my_mw_design_lib with top level dir my_mw_design_lib, and subdir lib, lib_1, lib_bck within it
open_mw_lib my_mw_design_lib  => opens design my_mw_design_lib so that we can run cmds on it.

We can combine create and open MW in one cmd: create_mw_lib ... digtop -open

ex: create_mw_lib -technology /db/DAYSTAR/.../Milkyway/gs40.6lm.tf -mw_reference_library "/db/DAYSTAR/.../Milkyway/pml48MwRefLibs/CORE /db/DAYSTAR/.../Milkyway/pml48ChamMwRefLibs/CORE" -open my_mw_design_lib => done only once, when mw design lib doesn't exist. mw ref lib is the one created above using MilkyWay tool.
open_mw_lib my_mw_design_lib => just open mw lib for any subsequent run, as mw lib already exists.


#synthesize design, or do whatever we want to do on this mw design, then save MW design using this cmd: (save_mw_cel cmd doesn't work here, as it's supported only in ICC). MW db has netlist, synth constraints and optional fp, place, route data (if they exist).
write_milkyway -output digtop => this creates the my_mw_design_lib/CEL dir, which has the digtop:1 file. Here :1 is the version number. If we use the write_milkyway cmd more than once, it creates an additional design file and increments the version number. You must make sure you open the correct version in Milkyway; by default Milkyway opens the latest version. To avoid creating an additional version, use the -overwrite switch to overwrite the current version of the design file and save disk space.

-----
To load TLU+ file:
---
TLU+ is a binary tech lookup table file, which is used by ICC to calc interconnect R,C values based on net geometry.
cmd: set_tlu_plus_files -max_tluplus <max_tluplus_file> -tech2itf_map <mapping_file> => sets pointers to the tlu+ files, assuming they've already been generated. The tech2itf map file is needed to map names from the .tf (technology file) to the .itf (interconnect technology format) file. We need the mapping file because we used the .tf file above in the create_mw_lib cmd, and that .tf file may have layer names different from the .itf file. Since .itf files are used to generate the tlu+ files, names in the .tlup files may differ from the ones in the .tf file, so the mapping file resolves this.
ex: set_tlu_plus_files \
    -max_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.maxc_maxvia.wb2tcr.metalfill.spb.nlr.tlup \
    -min_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.minc_minvia.wb2tcr.metalfill.spb.nlr.tlup \
    -tech2itf    /db/DAYSTAR/design1p0/HDL/Milkyway/mapping.file

To generate TLU+ files:
---
Generally, we have tech file (.tf) which is similar to lef tech file used in vdio. .tf file has all metal/via rules, complex drc rules, all layers, etc and is very elaborate. This is what we had at AMD.
For an ex, look in /db/DAYSTAR/design1p0/HDL/Milkyway/gs40.6lm.tf. It has following in it:
Technology      { name="gs40" unitLengthName="micron" ...  }
Tile    "unit"  { width=0.4250 height=3.4000 }
Layer   "MET1"  { layerNumber=10 minWidth=0.175 minSpacing=0.175 ... } => many more layers as poly, tox, hvt, bondwire, etc.
Layer   "VIA2"  { layerNumber=13 ... }
FringeCap 17    { number=17 layer1="MET6" layer2="MET1" minFringeCap=0.000010 maxFringeCap=0.000010 } =>b/w any 2 layers
DesignRule      { layer1="VIA1" layer2="VIA2" minSpacing=0 }
ContactCode "VIA23" { contactCodeNumber=2 cutLayer="VIA2" lowerLayer="MET2" upperLayer="MET3" ... }
and many more layers ...

Generally vendors provide only .itf (interconnect technology format). These .itf contain desc of process, thickness and phy attr of conductor and dielectric layers, via layers, etc. These are used to extract RC values for interconnects. These .itf are used to generate TLU+ files to be used by ICC by this cmd:
grdgenxo -itf2TLUPlus -i <abc.itf> -o <abc.tluplus> => -itf2TLUPlus option generates tlu+ file instead of nxtgrd file (nxtgrd file are used in star-rcxt tool. this is needed when running ICC in signoff mode)
ex: grdgenxo -itf2TLUPlus -i .../gs40.6lm.maxc_maxvia.wb2tcr.metalfill.spb.nlr.itf.eval -f /testcase/di3/techfiles/sp_di1/sr60/TLUPlus/6lmalcap/itfs/c021.format -o gs40.6lm.maxc_maxvia.wb2tcr.metalfill.spb.nlr.itf.eval.tlup

These tlu+ files have the same layer names as those in the .itf files. Since these names may not match the names in the .tf files, we use a mapping file that maps .tf layer/via names to .itf layer/via names. It's called a .map file or mapping.file or any other name. For an ex, look in /db/DAYSTAR/design1p0/HDL/Milkyway/mapping.file => It has all "capital letter" layer names mapped to "small letter" layer names. It also removes all layers except active, poly, met and via layers, as the others are not needed.
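As a hedged sketch of the format (section keywords and layer names shown as I recall them; check an existing mapping.file for the exact syntax), the map file just pairs each .tf name with its .itf name:
conducting_layers
  MET1   met1
  MET2   met2
  MET3   met3
via_layers
  VIA1   via1
  VIA2   via2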

------------------------
DC synthesis topo mode: So, the complete flow for DC synthesis in topo mode looks like this:
----

In .synopsys_dc.setup file in Synthesis dir, set search path to wherever you have .db files.
Then run dc_shell in topo mode: dc_shell-t -2010.03-SP5 -topo -f tcl/top.tcl | tee logs/top.log
 
#In dc_shell, run initial setup/analyze the normal way
source tcl/setup.tcl
source tcl/analyze.tcl

elaborate      $DIG_TOP_LEVEL
current_design $DIG_TOP_LEVEL
link
set_operating_conditions -max W_125_1.35 -library {PML48_W_125_1.35_COREL.db PML48_W_125_1.35_CTSL.db} => points to lib in search path
#set auto_wire_load_selection true => commented as no wlm (as we use tlu+ and net geometry to calc res/cap values)
#set_wire_load_mode enclosed => commented as no wlm

#### start of special cmds for running in topo mode ####
#open/create mw lib
set lib_exist [file exists my_mw_design_lib]
if {$lib_exist != 1} {
create_mw_lib -technology /db/DAYSTAR/design1p0/HDL/Milkyway/gs40.6lm.tf \
              -mw_reference_library "/db/DAYSTAR/design1p0/HDL/Milkyway/pml48MwRefLibs/CORE /db/DAYSTAR/design1p0/HDL/Milkyway/pml48ChamMwRefLibs/CORE" -open my_mw_design_lib
}
open_mw_lib my_mw_design_lib

#Enable cell area and footprint checks (so that the area and footprint of each cell are consistent) between the logical library (in link_library) and the physical library (in MW db)
set_check_library_options -cell_area -cell_footprint
check_library

#set tlu+ file instead of WLM
set_tlu_plus_files \
    -max_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.maxc_maxvia.wb2tcr.metalfill.spb.nlr.tlup \
    -min_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.minc_minvia.wb2tcr.metalfill.spb.nlr.tlup \
    -tech2itf    /db/DAYSTAR/design1p0/HDL/Milkyway/mapping.file

check_tlu_plus_files => performs sanity checks on TLU+ files to ensure correct tlu+ and map file
#### end of special cmds for running in topo mode ####

#start with normal flow
set_driving_cell -lib_cell IV110 [all_inputs]
set_load 2.5 [all_outputs]

source tcl/dont_use.tcl
source tcl/dont_touch.tcl
...
compile_ultra -scan ...
...
#save final design from mem to MW lib (MW stores physical info of design), and name it as digtop
set mw_design_library my_mw_design_lib => to make sure design lib is set correctly
write_milkyway -output digtop -overwrite => Overwrites existing version of the design under the CEL view.

exit

-----------------------------------

Running ICC:
-----------
just like in DC, cp the .synopsys_dc.setup file from the synthesis dir to the dir where you are running ICC. It has the same settings as for DC, i.e. it sources other tcl files from the admin area, sets search_path to /db/../synopsys/bin, sets target_library and link_library to PML*_CTS.db, and sets other parameters for snps ICC.
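A minimal hedged sketch of such a .synopsys_dc.setup (paths are placeholders; lib names taken from the examples used elsewhere in these notes):
set search_path    [list . /db/<project>/synopsys/bin]   ;# placeholder path
set target_library "PML48_W_125_1.35_COREL.db PML48_W_125_1.35_CTSL.db"
set link_library   "* $target_library"                   ;# * = designs already in memory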

run ICC:
icc_shell -2011.09-SP4 -f tcl/top.tcl | tee logs/my.log => starts up icc

icc_shell> start_gui => to start the gui from icc_shell. 2 GUIs may open: one is the ICC main window, where we can enter cmds on the icc_shell built into this window. The other is the ICC layout window, which opens up whenever we open/import a design. From this window, we control and view PnR.

We can run ICC in 2 modes. Choose from File->Task in ICC layout window.
1. Design planning: Full chip planning/feasibility/partitioning is done. Visibility is turned OFF for cells and cell contents. Top panel shows fp, preroute, place, partition, clk, route, pin assgn, timing, etc. Once we are satisfied, we partition the top level design into blocks and do block level impl as shown next.
2. Block implementation: actual impl at block level is done. Visibility is turned ON for cells and cell contents. Top panel shows fp, preroute, place, clk, route, signoff, finish, eco, verification, power, rail, timing, etc.

#reset_design => removes all attr and constraints (dont_touch, size_only, ...)

top.tcl:
-------
#source some other files (same as in DC) => In this file set some variables, i.e "set RTL_DIR /db/dir" "set DIG_TOP_LEVEL  digtop" or any other settings

#create is needed only the first time the design is created in ICC. From then on, we just need to open the design.
create_mw_lib -technology /db/DAYSTAR/design1p0/HDL/Milkyway/gs40.6lm.tf \
              -mw_reference_library "/db/DAYSTAR/design1p0/HDL/Milkyway/pml48MwRefLibs/CORE /db/DAYSTAR/design1p0/HDL/Milkyway/pml48ChamMwRefLibs/CORE" -open my_mw_design_lib

open_mw_lib my_mw_design_lib => to open mw lib

#ICC can also directly open a mw db written by DC (as in DC topo), so no need to create/open new mw or import any netlist.
#open_mw_lib ../../Synthesis/digtop/my_mw_design_lib

set_tlu_plus_files \
    -max_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.maxc_maxvia.wb2tcr.metalfill.spb.nlr.tlup \
    -min_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.minc_minvia.wb2tcr.metalfill.spb.nlr.tlup \
    -tech2itf    /db/DAYSTAR/design1p0/HDL/Milkyway/mapping.file

check_tlu_plus_files

set mw_logic0_net "VSS"
set mw_logic1_net "VDD"

#read in verilog, vhdl or ddc format
#read_verilog -netlist ../Synthesis/netlist/digtop.v
#current_design $DIG_TOP_LEVEL
#uniquify
#link
#save_mw_cel -as $DIG_TOP_LEVEL

#all of the above can be replaced by this one liner
import_designs ../Synthesis/digtop/netlist/digtop.v -format verilog -top $DIG_TOP_LEVEL

#If we imported mw db from DC, then instead of importing netlist, we can open mw cel directly
#open_mw_cel $DIG_TOP_LEVEL => opens mw cel digtop written by DC. No need to specify path of mw lib, as path is set, whenever we open mw lib.

IO pad/pin placement:
--------------------
set_pad_physical_constraints => Before creating fp, we should create placement and spacing settings for I/O pads. These IO pads refer to analog buf cells that connect I/O pins to internal logic.
set_pin_physical_constraints => To constrain Pins. ICC checks to make sure constraints for both pads and pins are consistent.
set_fp_pin_constraints => sets global constraints for a block. If a conflict arises between the individual pin constraints and the global pin constraints, the individual pin constraints have higher priority.

To save pin/pad constraints:
write_pin_pad_physical_constraints <const_file> => saves all const applied using the pad and pin constraint cmds above

To read pin/pad constraints:
read_pin_pad_physical_constraints <const_file> => to read all pin/pad const

In our case, these pads are at top level, so we don't need pad const in digtop. We need pin const only when the actual pin io file is not available. Once we get the real io file with pin placement, we don't need to run this section; we just read in the pin def file.
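If we did need to constrain a few pins by hand, a hedged usage sketch would be something like this (pin name, side number and offset are illustrative, and option names are as I recall them, so check the man page):
set_pin_physical_constraints -pin_name "scan_en" -layers {MET2} -side 2 -offset 100 => put scan_en on side 2, on MET2, 100 units from the corner
write_pin_pad_physical_constraints pin_const.tcl => save the constraints for later runs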
 
create_floorplan: create a floorplan similar to what we do in VDI (for older ICC versions, use initialize_floorplan as create_floorplan is only supported from 2011 onwards)
------------------
create_floorplan => creates the block shape, size and placement rows (based on target util, aspect ratio, core size, bdry-to-core spacing, etc). Std cell placement rows are visible on zooming. Places constrained pads first. Any unconstrained pads are placed next, using any available pad location. Then pins are placed. If a pin location is not specified, pins are placed randomly and evenly distributed along the 4 sides of the block.

#-control_type <aspect_ratio | width_and_height | row_number | boundary> => The default control type is aspect_ratio, which indicates that the core area of the floorplan in the current Milkyway CEL is determined by the ratio of the height divided by the width. The width_and_height control type indicates that the core area is determined by the exact width and height.
#-core_width <width> -core_height <height>=> Specifies the width and height of the core area in user units.  This option is  valid only if you specify the -control_type width_and_height option.
#-left_io2core <x1> -right_io2core <x2> -bottom_io2core <y1> -top_io2core <y2> => Specifies the distance between the left/right/bot/top side of the core area and the right/left/top/bot side of the closest terminal or pad.
create_floorplan => creates fp with default options, to fit in all cells.
create_floorplan -control_type width_and_height -core_width 180 -core_height 200 -left_io2core 8.5 -right_io2core 8.5 -bottom_io2core 8.5 -top_io2core 8.5 => specify fp size and spacing

#initialize_rectilinear_block => only for rectilinear blocks (L,T,U or cross-shaped). In this, pins are not touched at all.

##defining routing tracks. create_track to create tracks. report_track shows all tracks (usr or def). Generally, we'll see all metal layers in both X and Y dirn.

#write_def or write_floorplan to save fp into def or mw.
write_def -output fp_for_DC_topo.def => writes the fp def, so that we can use this fp info in DC topo to get a better synthesized netlist.

#read_def or read_floorplan to import in a fp def file, which has some/all pwr routes, i/o pins and chip dimensions.
read_def chip.def => it adds the physical data in the DEF file to the existing physical data in the design. To replace rather than add to existing data, use the -no_incremental option

#pg connections
derive_pg_connection -power_net VDD -ground_net VSS => creates logical connection b/w pg nets in design to pg pins on stdcells
check_physical_constraints => check that logical lib (.db) and physical lib (mw) match. we see warnings about missing pg nets in fp
report_cell_physical -connection => reports all pin connections for all stdcells

Virtual flat placement:  This is for design planning/feasibility purpose only.
----------------------
Helps you decide on the locations, sizes, and shapes of the top-level physical blocks. This placement is "virtual" because it temporarily considers the design to be entirely flat, without hierarchy. After you decide on the shapes and locations of the physical blocks, you restore the design hierarchy and proceed with the block-by-block physical design flow.

set_fp_placement_strategy => sets parameters that control the create_fp_placement and legalize_fp_placement commands. These settings are  not applicable  to  other  placement  commands  or other parts of the flow.
create_fp_placement => performs a virtual flat placement of standard cells and hard macros. It provides you with an initial placement for creating a floorplan to determine the relative locations and shapes of the top-level physical blocks


power planning: optional, only needed if we need to create straps/rings.
-------------
#set_fp_rail_constraints => defines PNS (Power network synthesis) constraints
set_fp_rail_constraints -add_layer -layer MET2 -direction vertical -max_strap 20 -min_strap 10 -min_width 0.4 -spacing minimum => -add_layer says to add 10-20 power straps on MET2 in vert dirn, with min_width of 0.4 units. -spacing says that spacing b/w pwr and gnd nets can be min spacing. Sometimes we want to route signals in b/w these pwr and gnd nets, so we may choose "-spacing distance" to specifically specify the distance.
set_fp_rail_constraints -add_layer -layer MET3 -direction horizontal -max_strap 20 -min_strap 10 -min_width 0.4 -spacing minimum => this adds horz straps in MET3

#set_fp_block_ring_constraints => defines the constraints for the power and ground rings that are created around plan groups and macros, when pg n/w is synthesized. This may not be needed for our purpose, since we don't have macros, around which we want to create rings
set_fp_block_ring_constraints -add -horizontal_layer METAL5 -vertical_layer METAL6 -horizontal_width 3 \
-vertical_width 3 -horizontal_offset 0.600 -vertical_offset 0.600 -block_type master -nets {VDD VSS} -block { RAM210 }

#synthesize_fp_rail command => synthesizes the power network based on the set_fp_rail_constraints cmd.
synthesize_fp_rail -power_budget 800 -voltage_supply 1.32 -output_directory powerplan.dir -nets {VDD VSS} -synthesize_power_plan => synthesizes fp rail

commit_fp_rail => commit the power plan to convert the virtual power straps and rings to actual power wires, ground wires, and vias.

create views:
-----------
#specifying min/max timing lib => "link_library" or "target_library" in .synopsys_dc.setup has the max lib only. We are not allowed to specify the min lib there. If more than 1 .db file is specified in link/target library, the tool just looks through these .db files and stops the first time it finds the required cell. That's why we specify just the max lib files for both CORE and CTS cells.
#So, to specify min lib for min delay analysis, we need to use the "set_min_library" cmd => it associates a min lib with max lib, i.e to compute min dly, tool first consults the library cell from the max library.  If a library cell exists  with  the same  name, the same pins, and the same timing arcs in the min library, the timing information from the min library is used.  If the tool  cannot  find  a  matching cell in the min library, the max library cell is used.

set_min_library PML48_W_125_1.35_COREL.db -min_version PML48_S_-40_1.65_COREL.db => for core cells
set_min_library PML48_W_125_1.35_CTSL.db -min_version PML48_S_-40_1.65_CTSL.db => for cts cells

list_libs => shows all min/max lib. m=min, M=max. Make sure all paths, etc are correctly reported.

###setting mmmc flow: ICC uses multi scenario method to analyze and optimize these designs across all design corners and modes of operation.
A scenario is a combination of modal constraints (test mode or standby mode) and corner specifications (operating conditions of various PVT). create_scenario defines one such mode/corner. In multicorner-multimode designs, DC/ICC uses a scenario or a set of scenarios as the unit for analysis and optimization. The current scenario is the focus scenario; when you set modal constraints or corner specifications, these typically apply to the current scenario. The active scenarios are the set of scenarios used for timing analysis and optimization.
Specify the TLUPlus libraries, operating conditions, and constraints that apply to the scenario. In general, when you specify these items, they apply to the current scenario.

###create scenario func_max, with max dly lib, and max rc tlu+.
create_scenario func_max => creates scenario, makes that scenario current and active
current_scenario => display the current scenario
current_scenario func_max => current scenario is set to func_max

#set_operating_conditions => defines op cond under which to time or optimize the design
set_operating_conditions W_125_1.35 -library {PML48_W_125_1.35_COREL.db PML48_W_125_1.35_CTSL.db}

#create_operating_conditions -name typ_lib_set -lib {PML48_N_25_1.5_COREL.db PML48_N_25_1.5_CTSL.db} -proc 0 -temp 25 -volt 1.8 => creates new op cond which may not be present. NOT needed for our purpose

#tlu+ set to max rc for both max/min corner
set_tlu_plus_files \
    -max_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.maxc_maxvia.wb2tcr.metalfill.spb.nlr.tlup \
    -min_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.maxc_maxvia.wb2tcr.metalfill.spb.nlr.tlup \
    -tech2itf    /db/DAYSTAR/design1p0/HDL/Milkyway/mapping.file

check_tlu_plus_files

#read sdc file that has constraints from DC. this replaces all lines in DC starting from "set_op_cond" to dont_use/touch, i/o dly, max_transition, create_clock, false_path/multicycle_path, disable_timing etc.
read_sdc
read_sdc ../../Synthesis/digtop/sdc/constraints.sdc

#check
check_timing => all paths should be constrained. If there are unconstrained paths, these should all be false paths as defined in false path file. run report_timing_requirements cmd to verify that.

###create scenario func_min, with min dly lib, and min rc tlu+.
create_scenario func_min
current_scenario => displays the current scenario
current_scenario func_min => current scenario is set to func_min

set_operating_conditions S_-40_1.65 -library {PML48_S_-40_1.65_COREL.db PML48_S_-40_1.65_CTSL.db}
set_tlu_plus_files \
    -max_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.minc_minvia.wb2tcr.metalfill.spb.nlr.tlup \
    -min_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.minc_minvia.wb2tcr.metalfill.spb.nlr.tlup \
    -tech2itf    /db/DAYSTAR/design1p0/HDL/Milkyway/mapping.file

check_tlu_plus_files

read_sdc ../../Synthesis/digtop/sdc/constraints.sdc

#reporting scenarios
all_scenarios => displays all the defined scenarios
report_scenarios => reports all the defined scenarios
set_active_scenarios {s1 s2} => sets s1,s2 to active. -all makes all scenarios active
all_active_scenarios => display the currently active scenarios
remove_scenario => remove the specified scenarios from memory (-all removes all scenarios)
check_scenarios => check all scenarios for any issues

place:
-----
#insert_port_protection_diodes => adds diodes at the specified ports in your netlist to prevent antenna violations. Should be done after fp and before place. report_port_protection_diodes reports the port protection diodes that are inserted in your design.

#pg connections
preroute_standard_cells -fill_empty_rows => Generates physical PG rails for standard logic cells. It connects all pwr/gnd rails in stdcells together, and then connects them to straps and pwr rings. "-fill_empty_rows" switch fills the CORE area or specified area with empty PG rails where cells can be subsequently placed, so that the entire region has PG rails.

#set active scenario to run setup opt for func_max and hold opt for func_min
set_scenario_options -setup true -hold false -scenarios func_max => Sets the scenario options for func_max to do opt on setup but not on hold (by default, it does opt on both setup and hold timing)
set_scenario_options -setup false -hold true -scenarios func_min => Sets the scenario options for func_min to do opt on hold but not on setup

set_active_scenarios {func_max func_min} => set both these scenario active. NOTE: .lib doesn't have process set to 1 for min lib (process=-3), so check_scenarios will warn. place, route, etc won't run. So, set active scenario to "func_max" only =>  set_active_scenarios {func_max}

# Add set_propagated_clock
set_propagated_clock [all_clocks]

###checks to be done prior to running place, so that any issues can be identified
check_design => check_design -summary cmd automatically runs on every design that is compiled. However, you can use the check_design cmd explicitly to see warning messages. Potential problems detected by this cmd include unloaded input ports or undriven output ports, nets without loads or drivers  or  with multiple drivers, cells or designs without inputs or outputs, mismatched pin counts between an instance  and its ref, tristate buses with non-tristate drivers, and so forth.

check_timing => checks timing and issues warnings. This cmd without any options performs the checks defined  by the timing_check_defaults variable. Redefine this variable to change the value.

check_physical_design -stage pre_place_opt => does phy design checks on design data for place. use "-stage pre_clock_opt" for pre cts, and "-stage pre_route_opt" for pre route.

# Perform timing analysis before placement (only run setup). When we do report_timing for setup, it reports setup for active scenarios. If one of those active scenarios doesn't have "setup=true", then nothing is reported. So, we provide the scenario name "func_max" during report_timing (as "func_min" is only valid for hold)
set rptfilename [format "%s/%s" timingReports ${DIG_TOP_LEVEL}_pre_place.rpt]
redirect $rptfilename {echo "digtop pre place setup run : [date]"}
redirect -append $rptfilename {report_timing -delay_type max -path full_clock_expanded -max_paths 100 -scenarios {func_max}}

#### place
# Add I/O Buffers
set_isolate_ports -driver BU120 -force [all_inputs] => force BU120 cell on all i/p ports
set_isolate_ports -driver BU120 -force [all_outputs] => force BU120 cell on all o/p ports
report_isolate_ports => reports all i/o ports and their isolation cells.

#place_opt -area_recovery  => Performs coarse placement, high-fanout net synthesis, physical opt, and legalization. Doesn't touch the clk n/w.
#-area_recovery => min area target
#-cts => enables quick cts, opt and route within place_opt, when designs are large. Should always run clock_opt eventually.
#-spg => uses Design Compiler's Physical Guide information to guide optimization. We can use either mw, .ddc or def file from DC, all of which have physical info. However, the guidance feature is only available in DC gui, so -spg will work only if DC mw or ddc has been gen using this feature. Also fp def from ICC should be imported into DC, so that DC can better synthesize the netlist based on fp. Just using DC topo mode doesn't mean that placement info can be read into ICC.

place_opt -area_recovery => may need to be run multiple times with diff options to fix violations.

#reports/checks
report_constraint
report_design
report_placement_utilization
create_qor_snapshot -name post_place => stores design qor in set of report files in dir "snapshot"
report_qor_snapshot => used to retrieve the qor rpt.

# Perform timing analysis after placement
set rptfilename [format "%s/%s" timingReports ${DIG_TOP_LEVEL}_post_place.rpt]
redirect $rptfilename {echo "digtop post place setup run : [date]"}
redirect -append $rptfilename {report_timing -delay max -path full_clock -max_paths 100}

# Add Spares. for scan, add flops with scan, otherwise non-scan flops.
insert_spare_cells -lib_cell {IV120L NA210L} -cell_name spares -num_instances 10 -tie => inserts spare cells group specified (IV120L,NA210L) 10 times spread uniformly across design with input pins tied to 0.
all_spare_cells => list all spare cells in design

#check and save
check_design
save_mw_cel -as post_place => we'll see a post_place:1 file in my_mw_design_lib/CEL dir.
write_def -output digtop_post_place.def
write_verilog ./netlist/digtop_post_place.v

#Post-placement optimization
psynopt => performs timing optimization and design rule fixing, based on the max cap and max transition settings while keeping the clock networks untouched. It can also perform power optimizations. It can remove dangling cells (to prevent that, use "set_dont_touch" cmd to apply dont_touch attr on required cells)
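For ex, a hedged sketch to keep the spare cells (inserted above) from being swept away as dangling cells during this opt (the instance name pattern is assumed from the insert_spare_cells -cell_name option above):
set_dont_touch [get_cells spares*] true => protect the spare cell instances
psynopt => post-placement opt with the spares left untouched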

CTS
----
Prereq for CTS are:
1. check_legality -verbose => to verify that the placement is legal
2. pwr/gnd nets should be prerouted
3. High-fanout nets, such as scan enables, should already be synthesized with buffers.
4. By default, CTS cannot use buf/inv that have the dont_use attribute to build the clock tree. To use these cells during CTS, you can either remove the dont_use attribute by using the remove_attribute command or you can override the dont_use attribute by specifying the cell as a clock tree reference by using the set_clock_tree_references cmd.
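For ex, to let CTS use a dont_use clk buffer, either of these works (cell name taken from the CTB list used later; a hedged sketch):
remove_attribute [get_lib_cells */CTB20B] dont_use => strip the dont_use attr from the lib cell
set_clock_tree_references -references {CTB20B} => or just declare it as a clock tree reference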

CTS traces thru all comb cells (incl clk gating cells). However, it doesn't trace thru seq arcs or 3-state enable arcs.
check_physical_design -for_cts => checks if design is placed, clk defined and clk root are not hier pins.
check_clock_tree => checks and warns if clk src pin is hier, incorrect gen clk, clk tree has no sync pins, and if there are multiple clks per reg.

#set_clock_tree_options => sets clk tree options
#-clock_trees clock_source
#-target_early_delay insertion_delay => by default, min insertion delay is set to 0.
#-target_skew skew
#-max_capacitance capacitance => by default, max cap is set to 0.6pf.(if not specified for design or not specified using switch here)
#-max_transition transition_time => By default, the max transition time is 0.5 ns
set_clock_tree_options -clock_trees sclk_in -target_early_delay 0 -target_skew 0.5 -max_transition 0.6 => set skew and tran

4 kinds of pins that are used in CTS. A pin may belong to more than 1 of these:
1. STOP pins: pins that are endpoints of clk tree. eg. clk pins of cells, clk pins of IP.
2. NONSTOP pins: pins that would normally be stop pins, but are not. The clock pins of sequential cells driving generated clocks are implicit NONSTOP (not STOP) pins, as clk tree balancing needs to be done thru these pins. NOTE: this default behaviour is different from EDI, where ThroughPin has to be used in the .ctstch file to force CTS thru the generated clks.
3. FLOAT pins: similar to STOP pins, but have special insertion delay requirements (have extra delay on clk pins). ICC adds the float pin delay (positive or negative) to the calculated insertion delay up to this pin. Usually, IP/Macro pins are defined as FLOAT pins so that we can add appr delay to the pin, equal to dly in the clk tree inside the IP/Macro.
4. EXCLUDE pins: clock tree endpoints that are excluded from CTS. Implicit exclude pins are clk pins going to o/p ports, pins on IP/macro that are not defined as clk pins (i.e. they are treated as data pins; we have to explicitly set these pins to stop_pins), or data pins of seq cells. During CTS, ICC isolates exclude pins (both implicit and explicit) from the clock tree by inserting a guide buffer before the pin. Beyond the exclude pin, ICC never performs skew or insertion delay optimization, but does perform design rule fixing. NOTE: In EDI, we use ExcludedPin in the .ctstch file to specify exclude pins

#set_clock_tree_exceptions => sets clk tree exceptions on the pins above. We don't need this.
#-clocks clk_names => clks must be ones defined by "create_clock" and NOT by "create_generated_clock".
#-stop_pins stop_pin_collection
#-non_stop_pins non_stop_pin_collection
#-exclude_pins exclude_pin_collection
#-float_pins float_pin_collection => additional options for max/min_delay_rise/fall should be used.
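If we did need exceptions, a hedged usage sketch (pin names purely illustrative) using the options above:
set_clock_tree_exceptions -clocks {sclk_in} -stop_pins [get_pins u_macro/CLK_IN] -exclude_pins [get_pins u_dft/obs_clk]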

#set_clock_tree_references => Specifies  the buffers, inverters, and clock gates to be used in CTS.
#-clock_trees clock_names => by default, it applies to all clks
#-references ref_cells => Specifies the list of buffers, inverters,  and  clock  gates for CTS.
set_clock_tree_references -references "CTB02B CTB15B CTB201B CTB20B CTB25B CTB30B CTB35B CTB40B CTB45B CTB50B CTB55B CTB60B CTB65B CTB70B" => In EDI, equiv cmd was "Buffer" used in .ctstch file


clock_opt => Performs clock tree synthesis, routing of clock nets, extraction, optimization, and hold-time violation fixing. Uses default wires (default routing rules) to route clk trees. We can define non-default routing rules using the "define_routing_rule" cmd, and use these routing rules with "set_clock_tree_options -routing_rule" (a hedged sketch follows the list below). NDR rules define what wires, routing layers and clk shielding to use. Shielding is done using the "create_zrt_shield" cmd, after doing clock_opt.
Prior  to the clock_opt command, use the set_clock_tree_options command to control the compile_clock_tree command. Briefly, it runs the following cmds under the hood:
o Runs the compile_clock_tree cmd => run multiple times using diff options
o Runs the optimize_clock_tree cmd
o Runs the set_propagated_clock command for all clocks from the root pin, but keeps the clock object as ideal
o Performs interclock delay balancing, if enabled (using the set_inter_clock_delay_options command)
o Sets the clock buffers as fixed
o Updates latency on clock objects with their insertion delays obtained after compile_clock_tree, if enabled (using the set_latency_adjustment_options command)
o Runs the "route_group -all_clock_nets" cmd to route clk nets. The "-no_clock_route" switch disables routing of clock nets.
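A hedged sketch of the NDR flow mentioned above (rule name, widths/spacings and layer list are illustrative, roughly double width/double spacing on MET2/MET3):
define_routing_rule clk_ndr -widths {MET2 0.4 MET3 0.4} -spacings {MET2 0.4 MET3 0.4}
set_clock_tree_options -clock_trees sclk_in -routing_rule clk_ndr -layer_list {MET2 MET3}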

#running clock_opt in these steps is more flexible than clock_opt alone.
#clock_opt -only_cts -no_clock_route => performs CTS with opt only with no routing of nets
#clock_opt -only_psyn -no_clock_route => performs opt only, with no routing of nets. This is used in a user-customized CTS flow where CTS is performed outside of the clock_opt command
#route_group -all_clock_nets

clock_opt

## Post CTS optimization
clock_opt -only_psyn

route:
------
zroute is the default router for ICC. Even though it's a grid-based router, it allows nets to go off grid to connect to pins. Prereqs for running zroute are: pwr/gnd nets must be routed and CTS should have been run.
We can run prerouter to preroute signal nets, before running zroute. zroute doesn't reroute these nets, but only fixes DRC.

check_routeability => to verify that design is ready for routing

#define routing guides
#create_route_guide -coordinate {0.0 0.0 100.0 100.0} -no_signal_layers {MET3 MET4 MET5 MET6}
#set_route_zrt_common_options -min_layer_mode hard -max_layer_mode hard => min/max layers are set to hard constraints, instead of soft constraints.
set_ignored_layers -min_routing_layer MET1 -max_routing_layer MET3 => max/min routing layers, by default these are hard constraints.
#define_routing_rule => to define nondefault routing rules (width,spacing,etc), both for routing and for shielding. These rules are assigned diff names, and then they are applied either on clk nets using "set_clock_tree_options" during CTS, or on signal nets and clk nets after CTS using "set_net_routing_rule".

#displays current settings for all routing options
set_route_zrt_common_options -verbose_level 1
report_route_zrt_common_options

#3 methods to route signal nets:
1. route_zrt_global => performs global routing. route_zrt_track => to perform track assignment. route_zrt_detail => to perform detail routing. Useful in cases where we want to customize routing flow
2. route_zrt_auto => performs all tasks in method 1 above. Runs fast so useful for analyzing routing congestion, etc.
3. route_opt => performs everything in method 2 above + postroute opt. To skip opt, add "-initial_route_only". Used for final routing.

The 3 substeps of routing are as follows:
-----------------
1. global routing:
----------
The global router divides design into global routing cells (GRC). By default, the width of a GRC is the same as the height of a standard cell and is aligned with the standard cell rows.
For each global routing cell, the routing capacity is calculated according to the blockages,
pins, and routing tracks inside the cell. Although the nets are not assigned to the actual wire
tracks during global routing, the number of nets assigned to each global routing cell is noted.
The tool calculates the demand for wire tracks in each global routing cell and reports the
overflows, which are the number of wire tracks that are still needed after the tool assigns
nets to the available wire tracks in a global routing cell.
Global routing is done in two phases:
phase 0 = initial routing phase, in which the tool routes the unconnected nets and calculates the overflow for each global routing cell
phase 1 = The rerouting phases, in which the tool tries to reduce congestion by ripping up and rerouting nets around global routing cells with overflows. It does it several times (-effort minimum causes this phase to run once while -effort high causes it to run 4 times)

routing report:
phase3. Both Dirs: Overflow = 453 Max = 4 GRCs = 449 (0.02%) => there are 453 wires in design that don't have corresponding track available. The Max value corresponds to the highest number of overutilized wires in a single GRC. The GRCs value is the total number of overcongested global routing cells in the design

2. track assignment:
------
The main task of track assignment is to assign routing tracks for each global route. During track assignment, Zroute performs the following tasks:
• Assigns tracks in horizontal partitions.
• Assigns tracks in vertical partitions.
• Reroutes overlapping wires.
After track assignment finishes, all nets are routed but not very carefully. There are many violations, particularly where the routing connects to pins. Detail routing works to correct those violations.

routing report: reports a summary of the wire length and via count.

3. detail routing:
------
The detail router uses the general pathways suggested by global routing and track assignment to route the nets, and then it divides the design into partitions and looks for DRC violations in each partition. When the detail router finds a violation, it rips up the wire and reroutes it to fix the violation. During detail routing, Zroute concurrently addresses routing design rules and antenna rules and optimizes via count and wire length.
Zroute uses a single uniform partition for the first iteration to generate all DRC violations for the chip at the same time. At the beginning of each subsequent iteration, the router checks the distribution of the DRC violations. If the DRC violations are evenly distributed, the detail router uses a uniform partition. If the DRC violations are located in some local areas, the detail router uses nonuniform partitions. It performs iterations until all of the violations have been fixed, the maximum number of iterations has been reached, or it cannot fix any of the remaining violations.

routing report: reports DRC violations summary at the end of each iteration. a summary of the wire length and via count.

route_opt => does all 3 stages of routing + opt.

report_design_physical -verbose => to view PnR summary rpt.
verify_zrt_route => checks for routing DRC violations, unconnected nets, antenna rule violations, and voltage area violations on all nets in the design, except those marked as user nets or frozen nets.

extract_rc -coupling_cap => explicitly performs postroute RC extraction, with coupling cap. RC estimation is already done, when route_opt or any report_* cmd is run.

#report setup/hold timing, write def/verilog

#post route opt if needed
#route_opt -incremental
#route_opt -skip_initial_route -xtalk_reduction

STA:
----

#set all scenarios active
set_scenario_options -setup true -hold true -scenarios {func_max func_min scan_max scan_min}
set_active_scenarios {func_max func_min scan_max scan_min}

#report timing

#opt if needed
#for fixing DRV
set routeopt_drc_over_timing true
route_opt -effort high -incremental -only_design_rule

#for fixing hold
route_opt -only_hold_time

#for si
set_si_options -delta_delay true -route_xtalk_prevention true -route_xtalk_prevention_threshold 0.35
route_opt -skip_initial_route -xtalk_reduction

#focal_opt

Signoff:
--------
from the routed db, we can do signoff-driven design closure in 2 ways:
1. signoff_opt => auto flow. runs analysis and optimization.
2. run_signoff => manual flow. runs analysis

During analysis in signoff, StarRC is used to perform a complete parasitic extraction and stores the results as a Synopsys Binary Parasitic Format (SBPF) file or SPEF file. For timing, PT is run, and the timing info is passed back to ICC. When not in signoff, ICC's internal engines are used for both extraction and timing.

set_primetime_options -exec_dir /apps/synopsys/pt/2011.12/amd64/syn/bin
set_starrcxt_options -exec_dir /apps/synopsys/star-rcxt/2011.12/amd64_starrc/bin

report_primetime_options
report_starrcxt_options

#scenarios
set_starrcxt_options -max_nxtgrd_file $max_grd_file -map_file /db/DAYSTAR/design1p0/HDL/Milkyway/mapping.file

#NOTE: still get errors, when running signoff_opt =>
#Information: Use StarRCXT path /apps/synopsys/star-rcxt/2011.12/amd64_starrc/bin. (PSYN-188)
#Error: The star_path option can only be used in conjunction with the star_max_nxtgrd_file option(s). (UIO-18)
#Error: The star_path option can only be used in conjunction with the star_map_file option(s). (UIO-18)

signoff_opt => run signoff optimization by ICC, based on results from signoff tool: starRC and PT.

#report_timing
#report_constraint -all_violators

save_mw_cel -as signoff

#if inc opt needed (to fix drv, hold time, si => use additional options with signoff_opt)
signoff_opt -only_psyn

#check_signoff_correlation => check the correlation between ICC and PT, and between ICC and StarRC.

Filler:
-------
# we insert filler cells before running signoff, so as to catch any issues
#insert_stdcell_filler => Fills empty spaces in standard cell rows with filler cells. the tool adds the filler cells in the order that you specify, so specify them from the largest to smallest. Run after placement.
#-cell_without_metal <lib_cells> or -cell_with_metal <lib_cells> => specify filler cells that don't contain metal or those that contain metal. Tool doesn't check for DRC if "cell_without_metal" is used.
insert_stdcell_filler -cell_without_metal {FILLER_DECAP_P12L FILLER_DECAP_P6L}
 
final_checks:
------------
#need to find checks for drc, antenna, connectivity

signoff_drc => performs signoff design rule checking. An IC Validator or Hercules license is reqd.

export_final:
------------
write_parasitics -format SPEF -output final_files/digtop_starrc.spef => writes spef file. If there are min and max operating conditions, parasitics for both conditions are written. In mmmc flow, the tool uses the name of the tluplus file and the temperature  associated  with the corner, along with the file name you specified, to derive the file name of the parasitic file (<tluplus_file_name>_<temperature>[_<user_scaling>].<output_file_name>).

write_def -version 5.5 -output final_files/digtop_final_route.def => writes def version 5.5

write_verilog final_files/digtop_final_route.v

--------------------------

*******************************************
For Running Place n Route in VDI:
---------------------------------------------------------------------
NOTE: our designs are in terms of dbu
1 dbu=1 um before shrink. For LBC7, shrink=0.9, so 1dbu=0.9um.  For LBC8, shrink=0.35, so 1dbu=0.35um.

Cadence Encounter VDI (Virtuoso Digital Implementation):

Dir: /db/Hawkeye/design1p0/HDL/Autoroute/digtop/vdio

run Encounter VDI:
encounter -9.1 -vdi -log logs/encounter.log => brings up gui
encounter -9.1_USR2_s159 -vdi -log logs/encounter.log => use this version to avoid manufacturing grid issues.
For bsub: bsub -q gui -Is -R "linux" "encounter ......"

Help for encounter :
/apps/cds/edi/9.1/doc/soceUG/soceUG.pdf
/apps/cds/edi/9.1/doc/fetxtcmdref/fetxtcmdref.pdf
/apps/cds/edi/9.1/doc/encounter/encounter.pdf

On command line, type man for that cmd, or help cmd.
type exit to exit encounter.

script: run_encounter => brings up gui (removes all previous log files and dbs)
Then in tcl/top.tcl, you have multiple scripts for different phases of PnR.
-------------
Import Design
---------------
import_design.tcl => Import design => sets up the design for import into the Encounter Digital Implementation system (EDI).
On gui: file->import design->basic
# Import LEF/Cap Tables/LIB/Netlist/Constraints => this file sets rda_Input(*) for various parameters.
loadConfig  /db/Hawkeye/design1p0/HDL/Autoroute/digtop/vdio/scripts/import.conf

Important parameters are :
ui_netlist => structural verilog netlist
ui_timelib.min/max =>min/max timing lib (ex: /db/pdk/lbc8/rev1/diglib/pml30/r2.5.0/synopsys/src/PML30_S_-40_1.95_CORE.lib)
ui_timingcon_file (constraints.sdc file) => same as pulled from DC (i.e set_load, set_driving_cell, set_dont_touch)
ui_*_footprint => provides names so that such cells can easily be identified.
ui_leffile => provide leffile for both tech and std cells.
Tech file: if it's 3-layer metal, the file will have pitch, width, spacing, etc. for MET1/2/3 and the various vias for VIA12 and VIA23. ex: /db/pdk/lbc8/rev1/diglib/pml30/r2.5.0/vdio/lef/pml30_lbc8_tech_3layer.lef
Std cell file: /db/pdk/lbc8/rev1/diglib/pml30/r2.5.0/vdio/lef/pml30_lbc8_core_2pin.lef

ui_core_* => core width,height,row_height,utilization,etc. => these values are bogus and not used for anything.
ui_captbl_file => lookup res/cap tables for typ,worst,best for M1/2/3 (cap for various width and space, min W=0.2um,S=0.2um, Ctot=0.35ff/um. It provides total cap, Coupling cap, Area cap and fringing cap. There's also an extended cap table) and for CONTACT/VIA1/2 (via resistance is about 5ohms. For M1/M2/M3 res is about 0.1ohm/um. Res is usually higher for M1 as it's thinner than top layers).We specify minC_minVia / maxC_maxVia cap table file.  NOTE: If QRC techfile specified, then that is used, and captbl file is ignored by tool.
ui_pwrnet,ui_gndnet => set pwr nets to VDD/VSS (for multi pwr domains, put all pwr supplies for that net). This will get connected to pwr/gnd pins found in stdcells lef file. Lef file has pin names and attribute to identify it as pwr/gnd pin.
ex: for VDD pin in lef file for a stdcell
PIN VDD => pin name is VDD
DIRECTION INOUT ;
USE POWER ; => pin is a pwr pin. (If it were a gnd pin, it would be USE GROUND.)

eg: set rda_Input(ui_pwrnet) {VDD VDD_WL EXTVREF} => specifies 3 pwr nets (NOT pins) with names VDD, VDD_WL, EXTVREF. These are the nets that are routed during "sroute". These will get connected to pins in stdcells with the "USE POWER" attribute, provided the net names match the pin names (in lef) from the stdcells. If the names are different, then use the "globalNetConnect" cmd explained below later. We can also specify nets with high terminal connections (large fanout) to get some default delay, load, etc to save runtime.
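As a quick hedged preview of that cmd (net/pin names illustrative; it's covered in more detail later):
globalNetConnect VDD -type pgpin -pin VDD -inst * -verbose => tie every stdcell pg pin named VDD to the VDD net
globalNetConnect VSS -type pgpin -pin VSS -inst * -verbose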

#from Enc version 11 and onwards, import design looks different:
--------
source /db/Hawkeye/design1p0/HDL/Autoroute/digtop/vdio/scripts/import.conf
init_design => this loads parameters from import.conf
Important parameters in import.conf are :
set defHierChar {/}
set init_top_cell {DIG_TOP}
set init_verilog {../input/DIG_TOP.preroute.v} => same as ui_netlist
set init_pwr_net {V1P8D} , set init_gnd_net {DGND} => pwr/gnd nets specified
set init_lef_file {../input/MSL445_4lm_tech.lef  ../input/MSL445_CORE_2pin.lef ../input/MSL445_CTS_2pin.lef ../input/sshdbw00096016020.lef} => all lef files
set init_mmmc_file {mmmc.view} => optional: everything in "create views" section (in create_views.tcl) below is specified here.
-------

Remove assigns from netlist => To remove assign statements from the synthesized netlist or final PnR netlist, use this cmd:
setDoAssign  on -buffer BU110 => this places buffer BU110 wherever assigns are found. Buffers are placed only if needed, else it will just move nets up/down the hier to get rid of the assign. So, the final netlist will be free of assigns. This cmd can also be placed in import.conf above.

NOTE: if there are any HardIP, .lef and .lib should be provided for those. If .lib is missing for any cell, Enc doesn't generate any error/warning, treats that cell as a black box, and leaves the paths going in/out of that cell unconstrained. This is very dangerous, as these paths will not be optimized for timing and will show up as "unconstrained paths" in report_timing.

# Save Design after Import. this saves design so that it can be restored later from enc.dat/* (encounter database). After various phases of PnR, EDI puts files here in appropriate dir.
saveDesign ./dbs/import/import.enc -def  => we save it in import dir. Def file in "dbs/import/import.enc.dat/digtop.def.gz" has die area (initial area in import.conf file), initial rows, tracks, gcellgrids (gcell grid and tracks are taken to be equal to M2 pitch), NO vias, components (just the names of all components from synthesized verilog netlist with no placement info), unplaced pins(pin names derived from synthesized verilog netlist), unplaced special nets VDD/VSS and all unplaced nets.
import.enc has this line: restoreDesign ./dbs/import/import.enc.dat digtop

#dir structure of dbs:
dbs has a dir for each step run. Within each dir, it has a .dat subdir which has multiple files. For ex, dbs/import/import.enc.dat/ has these files:
1. digtop.conf: same as import.conf, except that "ui_netlist" verilog netlist is now pointing to digtop.v.gz in import dir. If we are in route dir, then this netlist is set to digtop.v.gz in route dir. ui_core_height/width etc are also changed to the latest value depending on if floorplan has been run or not.
2. digtop.def.gz: has def file
3. digtop.v.gz: verilog generated after import (same as initial verilog from synthesis).
4. digtop.fp.gz: derived from  digtop.def.gz.
5. digtop.fp.spr.gz: just has vias/vdd/vss coords in it.
6. digtop.globals: sets global values for encounter to use
7. digtop.mode, digtop_power_constraints.tcl, enc.pref.tcl, digtop.opconds: all set*mode, pwr_mode encounter cmd, enc pref settings put here to be used later
8. digtop.place.gz, digtop.route.gz: intermediate place and route info to be used by enc.

#on screen o/p
On screen, we see VDI reads in .lef, .lib, and digtop.v netlist from synthesis tool. It reports total no. of cells and modules in verilog netlist. Then it reads .lib files and reports all cells found [all comb cells, seq cells, usable buffers (BU*), unusable delaycells/buffers (delay cells as BU112, clk tree buf as CTB* etc which are marked as dont_use)]. Reads in cap tables, sets few default parameters, and then saves verilog netlist and def file. Def file in "dbs/import/import.enc.dat/digtop.def.gz" has the initial floorplan size, rows, tracks, gcellgrid, Vias, all components(from digtop.v netlist), pins(all ports), special nets(VDD/VSS) and all other nets in the digtop.v netlist.

#freeDesign => used to remove lib and design-specific data from the Encounter session. It can be used as a shortcut in place of exiting and re-starting Encounter.
When you specify the freeDesign command, the Encounter software does not free collections but only invalidates them. For ex, after saveDesign, if we do freeDesign, it invalidates import.enc file, so that we can do loadConfig to load import.conf or do source *.enc to load any other file we wish.

#source => we can use this to source design from a particular step
source ./dbs/cts/cts_opt.enc => sources design from cts step => or restoreDesign ./dbs/cts/cts_opt.enc.dat digtop

#update_* => this can be used to update some variable that you set to some wrong value before.

Create Floorplan
-------------------
create_floorplan.tcl => add spacing b/w rows, define fp boundary, create ring, read pin locations,check fp, and then save design

# Add spacing between two rows => default is VDD then VSS then VDD and so on. 13.6dbu is the spacing height and 2 says after every 2 rows. So, it would be VDD VSS VDD space VDD VSS VDD space VDD ... Keep spacing as 1 row height, i.e. 13.6dbu for LBC8
setFPlanRowSpacingAndType 13.6 2

# Define Die, IO, Core boundaries => die is whole chip, IO is inside die where we want IO pins, CORE is inside IO where we want logic to be placed. space b/w DIE/IO and CORE boundary can be used for power rings or left empty for signals to be routed. IO pins can be placed on DIE or IO boundary. CoreMargins are spacing b/w core-to-IO or core-to-die
#NOTE: core height needs to be a multiple of std row height (which in turn is a multiple of M1 pitch). Core width needs to be a multiple of M2 pitch. Boundary around core also needs to be multiple of M1 pitch for top/bottom and M2 pitch for left/right. For LBC8: M1/M2 pitch is 1.7du, so boundary around core needs to be a multiple of 1.7.

#( -b <die_x1> <die_y1> <die_x2> <die_y2> (co-ord of die) <io_x1> <io_y1> <io_x2> <io_y2> (co-ord of outside edge of I/O box) <core_x1> <core_y1> <core_x2> <core_y2> (co-ord of outside edge of core box) ) => all co-ord in du. so power ring gets into that area b/w die edge and I/O box edge
#-s <core_box_Height> <core_box_Width> <coreToLeft> <coreToBottom> <coreToRight> <coreToTop> => <coreTo*> specifies margin from outside edge of core box to left/right/bottom/top DIE/IO.
#-d <die_box_Height> <die_box_Width> <coreToLeft> <coreToBottom> <coreToRight> <coreToTop> => <coreTo*> specifies margin from outside edge of core box to left/right/bottom/top DIE/IO.

#-d is most convenient to use as you specify the outermost size. -s is convenient when we have power rings. -b is only used when we want to have much finer control.
floorPlan -site CORESITE -b 0.0 0.0 2700 1452 14 14 2686 1438 14 14 2686 1438 => draw die (0,0,2700,1452), then inside it draw the IO box leaving 14dbu space on all sides (14,14,2700-14,1452-14), then inside it we have CORE box (CORE box in this case is same size as IO box)
floorPlan -site CORESITE -s 2100.0 2100.0 14 14 14 14 => draw 2100 dbu size CORE and leave space of 14dbu on all sides b/w DIE/IO to core.
floorPlan -site CORESITE -d 2100.0 2100.0 14 14 14 14 => draw 2100 dbu size DIE and leave space of 14dbu on all sides b/w DIE/IO to core.  Full floorplan is 2100x2100, but stdcells can only be placed in core which is smaller by 14 on all sides.

#for rectilinear shape
setObjFPlanPolygon 0 0 0 750 600 750 600 900 1000 900 1000 0 0 0 => draws a rectilinear shape starting from (0,0) to (0,750) to (600,750) to (600,900) to (1000,900) to (1000,0) to (0,0). Run this cmd after the floorPlan cmd above; it then modifies the fp area according to the polygon shape.
loadFPlan DIGTOP_mod_rect.fp => We can also use this to load rectilinear fplan. This loads the floorplan from fp file which has rows(DefRow), Track and GCellGrid defined. This fp file is generated first time by Tool after we manually adjust the boundary, and then it can be saved and then used for future use.

reportDesignUtil => It reports stdcell area utilization (area occupied by stdcells divided by the allocated area of the die, excluding placement blockages). This can approach 80% or more for a dense design. It's always < 100% as the outer area of the die is for VDD/VSS lines, so no stdcells can ever be there. It also reports Core and Chip utilization (area of core where stdcells can be placed divided by area of die)
We can also get same utilization report thru GUI: goto Place->Query_density->Query_place_density

#To manually edit VDD/VSS routes, we use setedit cmd. Else we can use addRing cmd to automatically create rings.
#setedit: Updates the Edit Route form and the design display area. many options available:
setEdit -shape RING => Specifies the shape associated with the wire you draw. here, wire drawn will be always RING shape.
setEdit -use_wire_group {0|1} => Groups multiple wires from the same net, which decreases resistance. default is 0, meaning wires are not grouped.
setEdit -width_horizontal 3.5 -spacing_horizontal 1.2 => Specifies the width and spacing for horizontal wires.
setEdit -width_vertical   3.5 -spacing_vertical   1.2 => Specifies the width and spacing for vertical wires.
setEdit -nets {VSSS VDDS} => Specifies one or more nets for editing. Here we are going to edit only the nets VDDS and VSSS.
setEdit -layer_vertical MET2 => specifies the layer for vertical wires.
setEdit -layer_horizontal MET3 => specifies the layer for horizontal wires.
setEdit -close_polygons {0|1} => Specifies whether to close a special route structure toward itself, using the Escape key. For the closing to complete, the ending wire segments must be drawn towards the start wire segments, but do not have to touch them. default is 0, meaning do not close.

#routes can now be added and committed using these 2 cmds: editAddRoute creates wire segments that start and stop at the specified points. The wire ends at the point specified by editCommitRoute.
editAddRoute x1,y1 => Specify the (x,y) of the centerline for the start point or end point of the wire segment. Continue doing this with more editAddRoute cmds, until we are about to reach back to the start point. At that time, do
editCommitRoute x1,y1 => route is closed at (x1,y1), which is the start point for a rectangular shape.
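#A minimal sketch putting the above together to hand-draw a rectangular VDD ring. Coordinates and the exact coordinate argument form are illustrative (follow the editAddRoute/editCommitRoute usage above), not from a real design:
#setEdit -nets {VDD} -shape RING -layer_horizontal MET3 -layer_vertical MET2 -width_horizontal 3.5 -width_vertical 3.5
#editAddRoute 20,20 => start at bottom-left corner
#editAddRoute 1000,20 => bottom edge
#editAddRoute 1000,700 => right edge
#editAddRoute 20,700 => top edge
#editCommitRoute 20,20 => close the ring back at the start point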

# Create Ring  (get metal layer names from /db/pdk/lbc*/.../vdio/lef/*.lef file). Power pin names (VDD,VSS) are the pin names that appear in std cell lef files, so we specify those names so that sroute connects all of them.
#-nets => first net specifies the first net around the core, 2nd net specifies the second net around the core and so on. So {VDD VSS} means first put VDD around the core and then VSS (so VDD is inside while VSS is outside)
#-type core_rings => Creates core rings that follow the contour of the core boundary or the I/O boundary.
#-center 1 => center the core rings b/w IO pads and core bdry. If -center 0, then we need to specify the 4 offsets: offset_top, offset_bottom, offset_left, offset_right. Offset is from edge of the inner ring to Core/IO bdry
#-layer_*  Specifies which layer to use for each side of the ring or rings being created.
#-spacing_* Specifies the edge-to-edge spacing between rings for each side of the ring
#-width_* Specifies the width of the ring segments for each side of the ring
#-follow core|io => specifies whether to follow core or io bdry (default is core)
#-skip_side {top bottom} => skips putting ring on top and bottom as regular VDD/VSS lines will anyway get added there.
#NOTE: in Encounter versions before -9.1_USR2_s159, core bdry top is taken as the last VDD net if closest power ring is VDD, or VSS net if closest power ring is VSS. So, this causes offset in power rings. Even in later encounter versions, rings may get offset (with -center 1). just add an extra row in such cases, so that vdd/vss gets lined up correctly.

#ex with ring centered
addRing -nets {VDD VSS} -type core_rings -center 1 -layer_top MET1 -layer_bottom MET1 -layer_right MET2 -layer_left MET2 -width_top 4 -width_bottom 4 -width_left 4 -width_right 4 -spacing_top 1 -spacing_bottom 1 -spacing_right 1 -spacing_left 1

#ex with ring not centered, allows more control. use this to avoid spacing b/w i/o bdry and ring, so that no routes are inserted there. -offset specifies spacing from the edge of the inner ring to the boundary of the referenced object for each side of the ring.
addRing -nets {VDD VSS} -type core_rings -center 0 -offset_top 5 -offset_bottom 5 -offset_left 5 -offset_right 5  -layer_top MET1 -layer_bottom MET1 -layer_right MET2 -layer_left MET2 -width_top 4 -width_bottom 4 -width_left 4 -width_right 4 -spacing_top 1 -spacing_bottom 1 -spacing_right 1 -spacing_left 1 => offset 0 gets the ring starting from boundary of core.

NOTE: in newer versions, we can use "layer, width, spacing, offset" within array style for each side. Above way is obsolete.
i.e instead of "-offset_top 5 -offset_bottom 4 -offset_left 3 -offset_right 2", we do "-offset {left 3 bottom 4 top 5 right 2}"
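#For illustration, the centered example above might be written in the newer array style roughly as below (assuming -layer/-width/-spacing take the same per-side {side value ...} form as -offset; check the addRing man page for your tool version):
#addRing -nets {VDD VSS} -type core_rings -center 1 -layer {top MET1 bottom MET1 left MET2 right MET2} -width {top 4 bottom 4 left 4 right 4} -spacing {top 1 bottom 1 left 1 right 1}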

###############################################
# Add stripe => Creates power stripes within the specified area. These stripes connect all the way down to the horizontal VDD/VSS lines on stdcells so that pwr supply to these regions in the centre of the core is still robust, preventing huge IR drop.
#-block_ring_top_layer_limit = Specifies the highest layer that stripes can switch to when encountering a block ring
#-block_ring_bottom_layer_limit = Specifies the lowest layer that stripes can switch to when encountering a block ring.

addStripe -block_ring_top_layer_limit MET3 -max_same_layer_jog_length 1.6 -padcore_ring_bottom_layer_limit MET1 -number_of_sets 1 -stacked_via_top_layer MET4 -padcore_ring_top_layer_limit MET3 -spacing 1 -xleft_offset 1345 -merge_stripes_value 0.85 -layer MET2 -block_ring_bottom_layer_limit MET1 -width 4 -nets {VSS VDD } -stacked_via_bottom_layer MET1 => width, layer, spacing and x-offset provided for the stripes. First VSS put then VDD starting from x=0.

#global net connect => used to connect pins/nets in inst to a specified global net (required only if we have more than 1 pwr net or gnd net, or names of pwr/gnd nets don't match with those of pwr/gnd pins in stdcells). type of pin needs to be specified, it can be one of any 4 types - tiehi, tielo, pgpin, net. 3 use scenarios for this cmd:
1. Connecting pins in a single instance to a global net:
ex: globalNetConnect NET123 -type pgpin -pin VDD -singleInstance Ictrl/FF_0_reg => connects pin VDD of flop to NET123
2. Connecting pins in a single/multiple instance to a global net:
ex: globalNetConnect VDD123 -type tiehi => tie "1'b1" in netlist to net VDD123.
ex: globalNetConnect VDD456 -type tiehi -pin OEN -inst PAD* -module {} => tie "1'b1" on -pin OEN of all PAD* inst to net VDD456.
3. Connecting nets to a global net:
ex: globalNetConnect NET123 -type net -net net1 -hierarchicalInstance Ictrl/I_Reg => connects net1 to NET123 (use -all instead of -hierarchicalInstance to do it across the whole design)
ex: globalNetConnect VDD123 -type pgpin -pin VDD -all => connects pg pin VDD of all instances to global net VDD123.

NOTE: "globalNetConnect -type tiehi|tielo" cmd connects 1'b1 or 1'b0 directly to power rails, and NOT to tie high/low cells. Ususally we want to isolate the input pins of cells from the power grid. This reduces noise coming from the power grid and reduces the possibility of damaging the gate oxide of the pin. To make connections to tie high/low cells, look in "warnings" section below.

#clearGlobalNets => clear everything
#globalNetConnect VDD_1P8 -type pgpin -pin VDD -inst * -module {} => adds new global net VDD_1P8 (1st arg) to pg pin VDD (2nd arg) found in all physical instances and modules of design. -type pgpin specifies that pwr/gnd pins listed with "-pin" param should be connected to global net VDD_1P8. VDD_1P8 is the Power ring around die specified as pwr_net in import.conf.
#globalNetConnect DGND -type pgpin -pin VSS -inst * -module {} => adds new global net DGND (1st arg) to pg pin VSS
#globalNetConnect VDD_WL_1P8 -type pgpin -pin VDD_WL -inst fram -module {} => connects for fram inst of any module. fram module lef file has VDD_WL as a power pin with multiple ports around fram bdry. VDD_WL_1P8 is the net in import.conf

#createRouteBlk => Creates a routing blockage object that prevents routing of specified metal layers, signal routes, and hierarchical instances in this area
createRouteBlk -box <llx lly urx ury> -layer {MET1 MET2} -exceptpgnet -name blk_1 => creates routing blkg named blk_1 to be applied on routing layers MET1 and MET2, in coords specified. -exceptpgnet Specifies that the routing blockage is to be applied on a signal net routing and not on power or ground net routing. usually needed on pwr rings so that VDIO doesn't route any signal nets there
ex: createRouteBlk -box 59.950 0.000 61.000 147.000 -layer {1} => created routing blkg on met1

#createPlaceBlockage => To prevent tool from putting any instance in this area. Usually done around HardIP.
createPlaceBlockage -box 779.3500 659.1000 1062.0500 813.8000 => placement blkg size will adjust automatically so that blkg always starts from a row height (i.e. a row cannot be partially blocked. It's either completely blocked or completely unblocked)

# sroute => (special routes) Routes power structures. Use this command after creating power rings  and  power  stripes. Throws some warnings related to def file that was created during import. sroute knows cell row height from CORESITE size in std cell lef file, so it routes VDD/VSS at CORESITE height.
#-nets {VDD VSS} => nets to sroute.
#-stripeSCpinTarget boundaryWithPin => extends unconnected stripes and standard cell pins to the design boundary and creates a new power pin along the design boundary. Any overlaps with existing I/O pins at the design boundary are flagged as violations after the extension. This option is helpful, since the Layout at top level connects to these power routes, so extending them all the way to the edge makes it easier to connect to the global power supply.
sroute uses both layer changes and jogging to avoid DRC viol.
#-allowJogging 1 => jogs are allowed during routing to avoid DRC violations. If 0, then jogs are avoided as much as possible.
#-allowLayerChange 1 => Allows connections to targets on different layers. If jogs do occur, it says that preferred routing dirn should be used, wherever possible.

sroute -verbose => normal routing where power structures stop at core boundary or at power rings.
sroute -stripeSCpinTarget boundaryWithPin -allowJogging 0 -allowLayerChange 1 => routes power structures all the way to IO/die boundary.

#create power pins (not needed). -geom creates physical pin at specified co-ord, else only logical pin created.
#createPGPin -geom <layerId> <llx> <lly> <urx> <ury> -net <net_name> <pg_Pin_name>=> layerId is number 4,5,etc.

# Read pin locations => we load the I/O location file saved from cadence after it placed the pins in some order the first time. This is to ensure that the next time we invoke VDI, we get the same pin locations. Goto File->save->I/O file => save in digtop.save.io in current dir (select locations for now). This is to be done after PnR is done the first time. Then we get pin locations, and we save them in this file.
#To move pin placement the first time VDI generates it, we can goto edit->Pin editor. then choose which pin to be placed where, and then save it using file->save->i/o file. Choose Save IO as "locations" and select "generate template IO file"
loadIoFile scripts/digtop.save.io (In this pin file we specify pin name, offset, width(thickness) and depth(length) and metal layer of pins. offset specified for left/right side is wrt bottom edge while for top/bot is wrt left edge, so even if size increases in x or y dirn, we don't need to change this file. Pins are always put at boundary of die. This is in contrast to def file, which have absolute coords.)
ex: (pin name="CLK10MHZ"    offset=3.2500 layer=3 width=0.2500 depth=1.4000 ) => this is for an iopin offset by 3.25 dbu

# Read pin locations (these i/o pin loc comes from top level design from layout person. for 1st pass, comment it)
#defIn /db/BOLT/design1p0/HDL/Autoroute/digtop/Files/input/digtop_pins.def

NOTE: left/right pins (horizontal pins) are usually on MET3 (not MET1 which is lowest layer), while top/bot pins are on MET2. Pins are usually on top 2 metal layers for that block, as that allows more efficient routing.

# Set Fix IO so that placement does not move pins around (comment it for 1st pass, as these are put arbitrarily initially, so we don't want to fix these)
#fixAllIos => changes the status of all I/O pins and I/O cells to a FIXED state. -pinOnly option changes the status of all I/O pins only to a FIXED state, while -cellOnly changes the status of all I/O cells only to a FIXED state.

# Check Floorplan
setDrawView fplan => Sets the design view in the design display area to amoeba, fplan or place
checkFPlan -reportUtil -outFile ./dbs/floorplan/check_fp.rpt => Checks  the  quality  of the floorplan. This should be run on initial fp and the final fp (and also during intermediate steps for debug purpose). checks that can be performed are -feedthrough (feedthrough buffer insertion), -place (placement), -powerDomain (checks pwr domain) and -reportutil (reports target util and effective utilization). Look in check_fp.rpt for any issues (like pins not on tracks which result in inefficient layout, etc)
#utilization = stdcell_area/total_area (total_area is total area of die including empty rows, power rings, etc)
#density = stdcell_area/alloc_area (alloc_area is area of core where stdcells can be placed, so if we have power lines where there's not enough height for a row, we don't count that in alloc_area. similarly empty rows, power rings, area b/w core/die not counted. stdcell_area is sum total of all stdcells+IP_Blocks. Area of std_cells and IP_blocks is taken from lef file.
#So, utilization is always a lower number than density.
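#Ex with made-up numbers: if stdcell_area = 0.5 mm^2, total die area = 1.0 mm^2 and alloc_area = 0.7 mm^2, then utilization = 0.5/1.0 = 50% while density = 0.5/0.7 = ~71%, consistent with utilization always being the lower number.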
#NOTE: to get additional info, use below 2 cmds:
reportGateCount => can be used to report total no. of cells, and their area in terms of nd2x1 as well as absolute area.
checkDesign -noHtml -all -outfile ./dbs/floorplan/check_design_fp.rpt => run this after each step to get detailed info. checks design for missing cells, etc and is very comprehensive check. (-all performs all checks as dangling nets, floorplan errors, I/O pads/cells, nets, physical lib, placement errors, pwr/gnd connections, tieHi/Lo and if cells used have been defined in timing lib). It shows a concise report on screen and a detailed report in the outfile.
checkDesign report:
1. design summary (on screen): shows total no. of stdcells used, and their area.
2. design stats: On screen, it shows total no. of instances and nets, while in report it shows all cell types (as nand, or, sparefill, etc) used in design
3. LEF/LIB integrity check (in reports): checks whether cells used in design have correct lef/timing info.
4. netlist check: On screen, it shows IO port summary (total no of ports), while in reports, it shows Floating ports, ports connected to multiple pads (pads are what is on the bdry of chip, ports are connected to these pads inside the chip), Ports connected to core instances (in our case, no. of ports connected to core cells should equal total no. of io ports (minus any floating ports) as each i/o port has an IO buffer, so it's connected to just one inst). There should be 0 o/p pins connected to pwr/gnd net (since nothing should be connected to PG directly, it's thru TieOff cells). Under "Instances with multiple input pins tied together", we see those gates whose i/p pins are tied to the same net. Here we see all spare cells, as well as some other cells whose i/p are tied together for opt. "Floating Instance terminals" and "Floating IO terms" should be 0. Note that "Floating terminals" only reports a terminal as floating if it's not connected to any net. If it's connected to a net which is floating, then the terminal is still considered as not floating, but the net is considered floating, which gets reported in the next section as "undriven net" => very important to check these for floating inputs on gates.
5. net DRC: On screen, we see no. of floating pins, and other DRC on pins, while in reports, we see "No Fanin", "No Fanout" and "High FO" nets. We may have "no fanin" nets for modules which have i/o ports that aren't being used inside the module. Such ports get connected to "FE_UNCONNECTED_*" nets by encounter. These floating nets get carried over from the synthesized netlist, where they couldn't be removed because they were part of a bus, or because they were tied to 0/1, which is no longer needed (optimized away during PnR). NOTE: very important to check all "No Fanin" nets as any floating nets will be reported here, which may be inputs of gates.
6. IO pin check: In reports, we see all IO pins connected to which inst (all pins should be connected to BUF), "Instance with no net defined for any PGPin" (basically all inst, starting from instances in digtop and then in modules as they are referenced in digtop [no. of inst reported in design stats], are reported here as we don't have PGPin for inst; PG pins only exist in the lef file of inst, not in the verilog model).
7. Top level Floorplan check: In reports, it shows tracks which are offgrid, IO pins offtrack (some pins may get reported here as the pin def file from layout folks may not have all pins on track, though they will still be on mfg grid), "Floating/Unconnected IO Pins" (these are also pins offtrack, but not sure why they get reported in this section), etc. Look at the final numbers for "Floating/Unconnected IO Pins" and "IO Pin off track" given at the end of the report. That's the correct number.

NOTE: checkDesign should be run at each stage, as it gives valuable information about the design. "checkNetlist -includeSubModule" is by default included as part of checkDesign (it only includes "netlist check" section from "checkDesign" and is a good concise report). Run  checkDesign after final netlist is generated to see full report.

# Save design after floorplan
saveDesign ./dbs/floorplan/floorplan.enc -def => note, this time we save it in floorplan dir. Def file in "dbs/floorplan/floorplan.enc.dat/digtop.def.gz" has new area, rows, tracks, gcellgrids, vias, components, placed pins(from io def file), placed special nets VDD/VSS (as sroute is done) and all nets.

place blocks: This is needed only if we have hard macros that we want to instantiate.
------------
#instantiate hard macro at specified loc
#setObjFPlanBox <objectType> <objectName> <llx> <lly> <urx> <ury> => Defines the bounding box of a specified object, even outside the core boundary. <objectType> can be Bump, Cell, Group, Instance, I/O cell, I/O pin , Layershape, Module, Net, etc.
#flipInst <Inst> {MX | MY} => flips inst. MX -> Flip with Mirror on X axis, MY -> Flip with Mirror on Y axis
#orientateInst <Inst> {R90 | R180 | R270 | MX | MY} => orientate. R -> Rotate, M -> Mirror
ex: setObjFPlanBox Module abc 100.00 200.00 400.00 500.00 => bounding box for module abc with lower left x=100, lower left y=200, upper right x=400, upper right y=500.
ex: setObjFPlanBox Instance fram 10 10 30 40 => bounding box for instance fram present in digtop.v netlist.
ex: orientateInst fram  R90 => rotate inst fram by 90 degrees.

#add halo to block. A halo is an area that prevents the placement of blocks and standard cells within the specified halo distance from the edges of a hard macro, black box, or committed partition in order to reduce congestion.
addHaloToBlock 5  10  95 15 fram => adds halo to fram instance (in um). <from left edge=5> <bottom edge=10> <right edge=95> <top edge=15>

#cutRow => Cuts site rows that intersect with the specified area or object. Needed so that there will be no rows over that area or object, and hence the router will not route VDD/VSS lines there. If no options are specified, the cutRow command automatically cuts all blocks and all rows around the placement blockage. Instead of "cutRow", we can also do "sroute" after placing these IP blocks, so that sroute will automatically not put VDD/VSS lines over these IP.
#-area <box_coords> => Specifies the x and y coordinates of the box area in <llx> <lly> <urx> <ury> in which rows will be deleted.
#-selected => only rows interfering with selected objects will be cut
#-halo <space> => Specifies the additional space to be provided on the top, bottom, left, and right sides of the specified or selected object.

selectInst fram
cutRow -selected -halo 1 => specifies that additional space of 1um should be provided on all sides of the selected obj (fram). Also, all rows around placement blkg are deleted.

#we can also place an instance using these cmd:
selectInst I_ram/fram_inst => selects fram instance in digtop. here it's hard IP as felb800432
placeInstance I_ram/fram_inst 805 563 R0 => places at x,y =(805,563) with R0 orientation (cut sign on bot left of IP)
addRing .. -nets {DGND V1P8D} ... => adds power ring around fram
deselectAll => deselects the inst so that new cmds can be applied to whole design

#connect pwr pins on Blocks with power rings around them
sroute -connect blockPin  -blockPin all\
    -blockPinRouteWithPinWidth -jogControl { preferWithChanges preferDifferentLayer } \
    -nets { DGND V1P8D } -blockPinMinLayer 2 -blockPinMaxLayer 4

Create views
---------------
create_views.tcl => creates views for various operating modes (scan, functional, etc) of the design with various operating conditions (PVT). Called mmmc: multi mode multi corner. We specify bc/wc std cell library delays, and bc/wc Res/cap values. Then we "create_delay_corner" based on cell+wire delay. Then on top of that we create constraint_mode based on sdc files for func/scan/other modes. Then various "analysis_view" are created based on "delay corner" + "constraint_mode". Then we set the appropriate analysis view for the setup and hold corners.

# Create Library Sets => for worst case (P=weak, T=150C, V=1.65V), best case (P=strong, T=-40C, V=1.95V)
create_library_set -name wc_lib_set -timing [list /db/pdk/lbc8/rev1/diglib/pml30/r2.4.3/synopsys/src/PML30_W_150_1.65_CORE.lib \
                                                  /db/pdk/lbc8/rev1/diglib/pml30/r2.4.3/synopsys/src/PML30_W_150_1.65_CTS.lib]
#                                    -si     [list ../cdb/cdb_files/max.cdb]
create_library_set -name bc_lib_set -timing [list /db/pdk/lbc8/rev1/diglib/pml30/r2.4.3/synopsys/src/PML30_S_-40_1.95_CORE.lib \
                                                  /db/pdk/lbc8/rev1/diglib/pml30/r2.4.3/synopsys/src/PML30_S_-40_1.95_CTS.lib]
#                                    -si     [list ../cdb/cdb_files/min.cdb]

# Create Operating Conditions => just use ones in .lib files

# Create RC Corners to use in delay corner after this. Cap tables are specified to be used for extraction, when running this RC corner (default is to use Enc internal rules to extract RC). T is specified to derate R values in cap table (it overrides the value of Temperature in cap table). QRC tech file is used for sign-off RC extraction.  
create_rc_corner -name max_rc -cap_table    /db/pdk/lbc8/rev1/diglib/pml30/r2.4.3/vdio/captabl/4m_maxC_maxvia.capTbl -T 150 \
                              -qx_tech_file /db/pdk/lbc8/rev1/rules/parasitic_data/qrc/2009.06.01.SR6/4m/maxC_maxvia/qrcTechFile
create_rc_corner -name min_rc -cap_table    /db/pdk/lbc8/rev1/diglib/pml30/r2.4.3/vdio/captabl/4m_minC_minvia.capTbl -T -40 \
                              -qx_tech_file /db/pdk/lbc8/rev1/rules/parasitic_data/qrc/2009.06.01.SR6/4m/minC_minvia/qrcTechFile

# Create min/max Delay Corner. specifies lib set, rc corner and operating condition for this corner. -opcond specifies the op cond found in .lib file.
operating_conditions (W_125_2.5) { process : 3;
    temperature : 125;
    voltage : 2.5;
    tree_type : "balanced_tree";
  }
#-opcond_library Specifies the internal library name for the library in which the operating condition is defined. Every .lib file has a library name at the top. Note: this is NOT the file name, but library within that file. See liberty.txt file for info.
library ( MSL270_W_125_2.5_CORE.db ) {
 ...
}
So, in the lib set, if we specified multiple lib files, then for setup/hold analysis, tool picks up default op cond in each lib set when it's called. But if we want to force a particular op cond, we specify the opcond_library where it will look for opcond, and then use that P,V,T cond for particular corner.  We specify *CORE.db but could have specified *CTS.db too, since both of them have that op cond.
create_delay_corner -name max_delay_corner -library_set wc_lib_set -opcond_library PML30_W_150_1.65_CORE.db -opcond W_150_1.65 -rc_corner max_rc
create_delay_corner -name min_delay_corner -library_set bc_lib_set -opcond_library PML30_S_-40_1.95_CORE.db -opcond S_-40_1.95 -rc_corner min_rc

# Create Constraint Mode => for this netlist, we create two modes: functional and scan. NOTE: all files same as from synthesis.
create_constraint_mode -name functional -sdc_files \
    [list /db/Hawkeye/design1p0/HDL/Synthesis/digtop/tcl/env_constraints.tcl \ => env constraints (i/o load, i/p driver)
          /db/Hawkeye/design1p0/HDL/Synthesis/digtop/tcl/dont_use.tcl \ => dont_use (optional)
          /db/Hawkeye/design1p0/HDL/Synthesis/digtop/tcl/dont_touch.tcl \ => dont_touch (optional)
          /db/Hawkeye/design1p0/HDL/Synthesis/digtop/tcl/clocks.tcl \ => clk defn
          /db/Hawkeye/design1p0/HDL/Synthesis/digtop/tcl/constraints.tcl \ => all design constraints = i/o delays
          /db/Hawkeye/design1p0/HDL/Synthesis/digtop/tcl/gen_clocks.tcl \ => generated clk defn
          /db/Hawkeye/design1p0/HDL/Synthesis/digtop/tcl/case_analysis.tcl \ => scan_mode set to 0 (only if scan present)
          /db/Hawkeye/design1p0/HDL/Synthesis/digtop/tcl/false_paths.tcl \
          /db/Hawkeye/design1p0/HDL/Synthesis/digtop/tcl/multicycle_paths.tcl]
    
create_constraint_mode -name scan -sdc_files [list ./scripts/scan.sdc]
#scan.sdc has env constraints (i/p driver, o/p load), clk defn for scan clk (on port designated for scan clk) with a slower cycle than func clk, case analysis with scan_mode set to 1, and all design constraints (i/o delay redefined wrt scan clk). Only difference in scan sdc  (compared to functional sdc) is that no false path file is needed as there is only single clk (scan clk) when scan mode is set to 1. Also, i/o delay specified here is wrt scan clk, whereas in func, all i/o delay were wrt func clk

NOTE: instead of using all these *.tcl files from Synthesis dir, we can use the .sdc file generated in sdc/constraints.sdc using write_sdc command. Be careful though to remove "set_ideal_network", "set_false_path -from scan_enable", "set_clock_uncertainty", "set_resistance" from internal nets, "set_load" from internal nets, etc from this sdc file so that it can be used in PnR. Or the safer approach is to just use all constraints tcl files separately and not rely on the sdc file. In DC-topo, the constraints file has resistance/load on each net of design, causing EDI to pick these up, instead of calculating res/cap for each net. See Synthesis_DC.txt. Also the set_units cmd causes different cap/time units to be used in cdns/snps tools, so be careful. See in sdc.txt. Also, sdc generated by synopsys has "-library *.db" for the set_driving_cell cmd. This causes warnings as "Could not locate cell IV110 in any library for view MIN" in encounter, as when reading the sdc file for the MIN corner, there's no MAX corner db file available, causing those warnings. Best approach is to manually remove any reference to lib/db files, so that the same sdc files can be used for all MIN/MAX/NOM corners.

# Create Analysis Views => now create 4 views: func(max/min) and scan(max/min)
create_analysis_view -name func_max -delay_corner max_delay_corner -constraint_mode functional
create_analysis_view -name func_min -delay_corner min_delay_corner -constraint_mode functional
create_analysis_view -name scan_max -delay_corner max_delay_corner -constraint_mode scan
create_analysis_view -name scan_min -delay_corner min_delay_corner -constraint_mode scan

#NOTE: update_library_set, update_rc_corner, update_delay_corner, update_constraint_mode, update_analysis view can be used to update any of these variables.
#report_case_analysis can be done to see what values of pins are associated with diff analysis views. This is useful to verify that all views are correct. Sometimes, tools don't pick up constraints in *.tcl files and just ignore them, if it's not the expected syntax
#report_path_exceptions can be used to see list of all false paths used by VDIO.
#report_ports -pin [all_output] => to report caps+external_delay on all o/p ports. similarly for i/p ports with [all_input]. This can be used to verify if sdc files were loaded properly in all views (check these values for both func_mode and scan_mode)
#report_ports -pin [get_ports {ENABLE_PORT1}] => to report for a specific port

# Save design after creating views => save in views dir
saveDesign ./dbs/views/views.enc

Place
------
place.tcl => apply more constraints, set view, perform timing analysis, check design, attach bufs,  then do placeDesign, and check and save
# Apply additional constraints
set timing_enable_genclk_edge_based_source_latency false => Controls  how  the  software  chooses generated clock source latency paths. When set to false, the software does not check paths for the correct cause-effect relationship. We should set it to "true" so that we can see if all generated clocks have correct rise/fall relation with source clk. latency for generated clock is chosen as "0" for gen clk edges which don't have correct relationship with source clk. Ex: if gen clk is div by 1 and it's a +ve edge clk, then fall edge of gen clk will generate error (and hence have 0 latency) as fall edge of source clk can't generate fall edge of gen clk.

# Do placement, CTS and Route in Functional mode. We use scan mode briefly during CTS (to get clk tree) and then again go to func mode. We goto scan mode after place during STA/SIGNOFF timing analysis.
set_analysis_view -setup func_max -hold func_min => Defines  the  analysis views (func_max and func_min only) to use for setup and hold analysis and optimization. Here cap tables are used for wire res/cap. On screen, it shows what files it used for each view. sdc file is read here for the first time, so any errors/warnings found in sdc file syntax are reported here.
#since views are set to func mode, all_constraint_mode active at this time are only func mode. scan mode is not active. We can type "all_constraint_modes -active" to see all active modes.

### print some useful reports before doing placement
#report_clocks => This reports all clks (gen too) with their waveforms. If something is incorrect, it needs to be fixed
#check_timing -verbose timingReports/check_timing.rpt => This reports any problems with any clks. It shows all flops with no clks, timing loops, unconstrained paths, ideal clks and problem b/w master and gen clks. It can be used to find out any mismatches b/w PT timing run and VDIO run (especially if some paths still have setup/hold violations in PT, it's most likely due to unconstrained paths in VDIO)
#report_path_exceptions can be used to see list of all false paths used by VDIO.

# Perform timing analysis before placement. timeDesign Runs Trial Route, extraction, and timing analysis, and generates detailed timing reports. The generated timing reports are saved in ./timingReports directory or the directory that you specify using the -outDir  parameter. It saves reports for setup/hold for reg2reg, reg2out, in2out, clkgate (for paths ending in clk gating). options are -prePlace | -preCTS | -postCTS | -postRoute [-si] | -signoff [-si] | -reportOnly. Only -signoff uses QRC extraction and SignalStorm delay calc, others use native extraction. -si can only be used with -postroute and -signoff option. It generates glitch violation report and incremental SDF for timing analysis.
timeDesign -prePlace  -prefix digtop_pre_place => running setup, so uses func_max view.

# check design before placement
#check_timing -verbose timingReports/check_timing.rpt

checkPlace ./dbs/place/check_place_pre_place.rpt => Checks FIXED and PLACED cells for violations, and generates violation rpt in file specified. If no o/p file specified, summary report is shown, which shows placed and unplaced instances and density.
#On the screen (and also in log/encounter.log file), it shows total no. of unplaced instances, which should equal the no. of instances in the *_scan.v netlist generated from DC, which is fed into VDI (in file script/import.conf as ui_netlist). There is a script in  ~/scripts/count_instances.tcl to count total no. of leaf cells in DC. The gate count from this script should equal no. of unplaced instances in VDI.
#The other way, if you don't want to use the script, is to go to DC reports and look at the reports/digtop.scan.area.rpt file which shows total cell area in terms of nd2x1 gates. In the VDI log/encounter.log file, look at the placement density numerator area. Divide this by the area of an nd2x1 gate (by looking at nd2x1 gate area in the lef file), and you get the total no. of gates in terms of nd2x1.
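#Ex with made-up numbers: if the DC area report shows ~25,000 nd2x1-equivalent gates, and the VDI log shows a placement density numerator area of 230,000 um^2 with an nd2x1 area of 9.2 um^2 in the lef file, then 230,000/9.2 = ~25,000 gates, so the two counts match.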

checkDesign -noHtml -all -outfile ./dbs/place/check_design_pre_place.rpt => checks design for everything.

#add placement obstruction in case we need to add diodes or other IP. After done with the obstruction, we can delete it
createObstruct <x1 y1 x2 y2> -name ANT_RESV => block standard cell placement in the box formed by the co-ords provided, and give it the name ANT_RESV. this is so that any subsequent placement doesn't place any cells here.

#add antenna diodes to i/o pins. We do it before placing anything, since we specify exact location where we want to add diodes.
#2 ways: one by using below script (doesn't work with arrays), and other by using attachDiode cmd explained later.  
script by cadence to add diodes to all input + output pins: <EDI_install>/share/fe/gift/scripts/tcl/userAddDiodesToIOs.tcl
script by cadence to add diodes to all input pins:          <EDI_install>/share/fe/gift/scripts/tcl/userAttachIoDiodesToInputs.tcl
These scripts have procedures, which can be called as below:
encounter > userAttachIoDiodesToInputs AP001L => adds AP001L to all inputs near to where the input ports are.
3 main cmds in these scripts:
1. addInst -cell AP001L -inst I_GPIO_user_added => add an instance of AP001L and name it (still unplaced)
2. placeInstance I_GPIO_user_added 1550 8 -placed => place instance of AP001L at (1550,8)
3. attachTerm I_GPIO_user_added A I_GPIO[7] => A is the diode pin while I_GPIO[7] is the i/p port. Connect these terminals

# Add I/O Buffers
#attachIOBuffer => Adds  buffers  to  the I/O pins of a block and places the buffers near the I/O pins. Buffers are attached and then some of them are flipped to match row orientation (for VDD/VSS hookup).
#IMP: we need to use -markFixed with attachIOBuffer before running place, else place will remove many of them.
#-in or -out => specifies cell name of input or output buffer from the lib.
#-markFixed =>Marks newly-inserted buffers as Fixed.
#-port =>Prepends the port name to the name of the net or instance created.
#-suffix <suffxName> =>Appends a string to name of the net or instance created.
#-selNetFile <selNetFileName>=> Specifies the file that contains the names of nets (or ports in our case) to include in the buffer attachment operation.
#-excNetFile <excNetFileName>=> Specifies the file that contains the names of nets (or ports in our case) to exclude from the buffer attachment operation.
This is useful when we want to add one set of buffers to a few nets, and another set of buffers to all other nets. With exclude, we can use just 1 file (see the sketch after the examples below).
# Add BU140 on all inputs that do not go to the scan isolation gate (as scan iso already has 4x and gates to its inputs)
attachIOBuffer -port -suffix "_buf" -in  BU140  -markFixed -selNetFile ./scripts/in_bu140_list.txt

# Add 10X buffer on select outputs
attachIOBuffer -port -suffix "_buf" -out BU1A0M  -markFixed -selNetFile ./scripts/out_bu1a0m_list.txt

# Insert BU140 on all outputs that do not have the 10X buffer
attachIOBuffer -port -suffix "_buf" -out BU140  -markFixed -selNetFile ./scripts/out_bu140_list.txt

#To just attach buffers to all i/o ports, don't use any netfile.
attachIOBuffer -port -suffix "_buf" -in  BU140L  -markFixed
attachIOBuffer -port -suffix "_buf" -out  BU140L  -markFixed

# Set Fix IO so that placement does not move pins around
fixAllIos

# at this stage, reportGateCount should show cell count to be equal to gates from synthesis + IO buffers added.
reportGateCount

# Scan Trace
#specifyScanChain chain1 -start sdi_in -stop U19/B =>Specifies  a  scan  chain  or  group in a design, and gives it a name (ex: chain1 here). -start/stop specifies starting and stopping scan pin names (or inst i/p or o/p pin names).
#scanTrace -lockup -verbose => Traces  the  scan chain connections and reports the starting and ending scan points and the total number of elements in the scan chain. -lockup implies that tracing detects lockup latches automatically. -verbose prints cell inst names of scan chain.  used after specifyScanChain cmd.

# Place standard cells and spares. placeDesign first deletes the buffer tree to get rid of unwanted buffers/inverters. Reads all analysis views, reports total stdcell count (after deleting buffers), does spec file integrity checks, moves/flips instances, and then runs the placeDesign cmd which does a trial route. It looks for obstructions in the Vertical/Horizontal dirn, and shows the final congestion distribution. It does resizing, buffering, other DRV fixes (max cap, max tran, etc), calculates delays, fixes timing and then reclaims area by deleting/downsizing cells. It keeps on refining placement and building a congestion distribution map, until the placement is efficient.

setPlaceMode -timingDriven true -reorderScan false -congEffort high -clkGateAware true -modulePlan true => set global placement options; the placeDesign cmd below uses these settings
placeDesign -inPlaceOpt -prePlaceOpt => Places  standard  cells based on the global settings for placement, RC extraction, timing analysis, and trial routing. pre-placed buffer tree, etc are removed and optimized.
#-inPlaceOpt  = Performs timing-driven placement with optimization. enables the in-place optimization flow
#-noPrePlaceOpt = Disables the pre-placed buffer tree removal ( or pre-place optimization during the placement run). same as -incremental

# check and save design after placement
checkPlace ./dbs/place/check_place.rpt
checkDesign -noHtml -all -outfile ./dbs/place/check_design_place.rpt
saveDesign ./dbs/place/place.enc

# Perform timing analysis after placement
timeDesign -preCTS -prefix digtop_post_place
-----------
# Add Spares => create a spare module (containing some gates) and then place it repeatedly; repeat these steps as needed. Find names of available gates from the ATD page
#-clock <net_name> => specifies clk net to connect to clk pins of seq cells in spare module. Usually we do this to offer balanced clk tree even when spare flops are added during eco. Otherwise, extra load on clk net due to these spare flops may cause some other paths to fail hold/setup, which may not be fixable by metal only change.
#-reset <net_name>:<pin_name> => specifies the reset net to connect to reset pins of seq cells. If this option is not used, then the tieLo option should be used to tie reset pins, else they will be left floating.
#-tie <tie-cell-name> => specifies tie-hi and tie-low cells to add to spare module. w/o this, all pins are connected to 1'b0 or 1'b1 instead of being connected to tie-hi/tie-lo cell o/p.
#-tieLo <pin_names> => default is to tie pins high, unless specified using tieLo.
createSpareModule -cell  {IV120 IV120 IV120 IV120 BU120 BU120 BU120 BU120 AN220 AN220 AN220 AN220 NA220 NA220 NA220 NA220 OR220 OR220 NO220 NO220 EX220 EX220 MU121 MU121 LAL20 TDB21 TDB21 TDB21}  -tie TO010 -tieLo {TDB21:CLRZ  LAL20:CZ} -moduleName spare_mod1
#-area gives the total area coords where we want to place spares. -util is an obsolete parameter
placeSpareModule -moduleName spare_mod1 -offsetx 50 -offsety 300 -stepx 400 -stepy 700 -area { 15 15 2700 1400 }

NOTE: there are designs where we have spare module in RTL itself. In such case, we don't need to create spare module or place it separately in encounter. We run this: (we can use "specifySpareGate" cmd in eco script too, as that is where we need this spare gate info to do eco gate substitution)
#specifySpareGate -inst *Spare* => This lets encounter understand that this instance is a spare module and all gates in it are spare cells, so that it can be treated accordingly.
#specifySpareGate -inst I_scan_iso_out/g1453 => This adds "spare" property on this gate (which is not in spare module) so that it can be used as spare gate during eco.
#set_dont_touch *Spare*/* true => this is so that the tool doesn't remove the gates in Spare module.

# check and save design after placement with spares
checkPlace ./dbs/place/check_place_spares.rpt
checkDesign -noHtml -all -outfile ./dbs/place/check_design_place_spares.rpt
saveDesign ./dbs/place/place_spares.enc
-----------
NOT NEEDED
#optimizations: optDesign optimizes setup time (for worst -ve slack path, and then tries to reduce total -ve slack), corrects drv (for max_tran and max_cap viol), then if specified, corrects holdtime, opt useful skew, opt lkg power and reclaim area. In MMMC mode, it opt all analysis views concurrently. It uses techniques as add/delete buffer, resize gate, remap logic, move instance, apply useful skew.
#optDesign -preCTS|-postCTS|-postRoute -drv|-incr|-hold -prefix <fileNamePrefix> -outDir <dir_name> => w/o any options, it fixes setup and drv violations. -incr can only be used after running optDesign by itself to fix setup viol. -drv fixes drv, while -hold fixes hold viol. -drv|-incr|-hold can only be used one at a time. Default dir is timingReports for writing timing reports. In MMMC mode, optimizes all analysis views concurrently.

# Post-placement optimization => only if needed, repeat steps
setOptMode -effort high => effort level (default is high)
setOptMode -simplifyNetlist false => if true, simplifies netlist by removing dangling o/p, useless/unobservable logic, spares, etc.
setOptMode -fixFanoutLoad true => causes max FanOut design rule violations to be repaired (by default, drv don't fix these)
optDesign -preCTS -prefix digtop_post_place_opt => repairs design rule violations and setup violations  before clock tree is built. -prefix specifies a prefix for optDesign report file in timingReports/<prefix>_hold.summary, etc.
#optDesign -preCTS -drv -prefix digtop_post_place_opt => -drv (design rule violation) corrects max_cap and max_tran violations
#optDesign -preCTS -incr -prefix digtop_post_place_opt => -incr performs setup opt

# check and save design after post-place optimization
checkPlace ./dbs/place/check_place_opt.rpt
checkDesign -all -outfile ./dbs/place/check_design_place_opt.rpt
saveDesign ./dbs/place/place_opt.enc

# Perform timing analysis after placement
timeDesign -preCTS -prefix digtop_post_place_opt

#Save netlist post-placement optimization
#saveNetlist => this saves netlist from top lvel to leafcells. options:
#-excludeCellInst {SPAREFILL4 DECAP10 ..} => excludes specified logical or physical cells. put cell names in {...} or "...".
#-includePhysicalInst : Includes physical instances, such as fillers. Fillers are present in the top level module. Physical cells are not present in .lib (liberty) files but only in .lef, so by default they are not included in the netlist. This is how EDI figures out which cells are physical-only: it looks for cells in the .lef that are missing from the liberty files. If put in the verilog netlist, these cells will not run timing as there is no timing info for them. However, diodes and some other cells are present in liberty files, even though they are physical-only cells. This lets them be in the netlist so that we can run lvs for schematic vs layout when imported into icfb. Filler cells are just cap, so lvs complains about missing DCAP in the schematic, which we then manually add to the schematic to make it lvs clean.
#-includePhysicalCell {FILLER5 FILLER10 ..} includes the mentioned physical cell instances in the netlist.
#-excludeLeafCell => writes all of the netlist, but excludes leaf cell definitions in the netlist. This is how the netlist normally looks.
#-includePowerGround => Includes power and ground connections in the netlist file. This will add pwr nets (VDD/VSS) to all cells.

saveNetlist ./netlist/digtop_post_place_opt.v => this saves netlist from top level to leafcells.
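#A sketch combining some of the options above, e.g. to write a netlist with power/ground connections and filler instances included for lvs-style use (exact option mix depends on what the downstream tool expects):
#saveNetlist -includePowerGround -includePhysicalInst ./netlist/digtop_post_place_opt_pg.v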
-------

CTS => inserts clock tree, synthesize scan clk tree and mclk clk tree.
-----
to view clktree in gui, 2 options:
1. to view tree in text tree format: goto clock->browse clk tree ->set clock to spi_clk or whatever, select Preroute and then OK. Shows the whole hier in tree like structure.
2. to view the actual layout of clktree in gui, goto Clock->Display->Display_clock_tree. Choose "all clocks", display "all level" or start with "selected level 1" and then move to 2nd level and so on.

# Start Clean
#freeDesign

# Import post-place design
#source ./dbs/place/place_opt.enc

#creating clk tree spec file: we can either manually create this file or tool can create one for us from the SDC constraints in effect (here func view is in effect, so func.sdc used).
createClockTreeSpec -file func_clktree.ctstch => SDC mapping to CTS is done as follows. (SDC cmd -> CTS cmd)
#create_clock -> AutoCTSRootPin
#set_clock_transition ->  SinkMaxTran/BufMaxTran  (default is 400ps)     
#set_clock_latency -> MaxDelay(default=clock period), MinDelay(default=0)
#set_clock_uncertainty -> MaxSkew (default=300ps)
#create_generated_clock -> ThroughPin (adds necessary ThroughPin stmt)

# Insert Clock Tree, we have 4 separate clk trees here, but we use spi_clk to build CTS in scan mode, so that only 1 clk tree is built. This covers all clks. If we aren't in scan_mode, then we need to build 4 separate clk trees.
set_case_analysis 1 scan_mode_in

#setCTSMode is used in lieu of putting these settings in clk tree spec file. This cmd should be run before running specifyClockTree. Settings in clk tree spec file (in specifyClockTree) takes priority.
setCTSMode -useLibMaxCap true => set  all setCTSMode parameters before running the specifyClockTree command.
#-useLibMaxCap true => Uses the maximum capacitance values specified in the timing library.
#-routeBottomPreferredLayer 4 => Specifies the bottom preferred metal layer for routing non leaf-level nets.Default= 3
#-routeTopPreferredLayer 6 => Specifies the top preferred metal layer for routing non leaf-level nets.Default= 4
#-routeShielding VSS => shield nonleaf-level clk nets with net named VSS
#-routePreferredExtraSpace 3 => provide extra spacing of 3 tracks b/w clk and VSS, when routing nonleaf-level nets. Default=1
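#A sketch combining the options above into a single call (layer numbers and values are illustrative, not from a real flow):
#setCTSMode -useLibMaxCap true -routeBottomPreferredLayer 2 -routeTopPreferredLayer 3 -routeShielding VSS -routePreferredExtraSpace 2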

specifyClockTree -file ./scripts/func_clktree.ctstch => Loads  the  clock  tree specification file.
#scripts/func_clktree.ctstch: embed each clk between AutoCTSRootPin and END. Specify Period, MaxSkew, Buffer, ThroughPin.  ThroughPin is used for generated clks, so that skew requirements are maintained for generated clk to master clk. This helps in getting rid of hold violation between flops in master clk to flops in generated clks.
 Ex:
AutoCTSRootPin spi_clk => root pin spi_clk
Period         100ns => default=10ns
MaxDelay       10ns => max delay allowed from clock port of chip to any sink. default=10ns
MinDelay       0ns  => min delay allowed from clock port of chip to any sink. default=0ns
MaxSkew        2000ps => max skew between clk pins of any 2 flops. large value here implies fewer buffers will be injected in clk tree. 2ns allows only 1 or 2 levels of clk tree to be built. hold delays if any will be fixed by adding buffers in data path (burns less power). If we put skew of 200ps, we'll get 4 or 5 levels of clk tree.
SinkMaxTran    600ps => max transition (rise/fall) allowed at sink
BufMaxTran     600ps => max transition (rise/fall) allowed at i/p of any clk tree buffer
Buffer         CTB02B CTB15B CTB201B CTB20B CTB25B CTB30B CTB35B CTB40B CTB45B CTB50B CTB55B CTB60B CTB65B CTB70B => buffer cells to use during automatic, gated CTS
NoGating       NO => trace through clock gating logic. default=NO. If "rising/falling" used => Stops tracing through a gate (including buffers and inverters) and treats the gate as a rising/falling-edge-triggered flip-flop clock pin
DetailReport   YES
ForceMaxTran   YES
#AddSpareFF DTB10 5 => add max of 5 spare DTB10 FF to lowest level of clock tree. i/p of FF are tied to 0 and o/p left floating. These can be used during ECO without disturbing the existing clk tree network.
#SetDPinAsSync  NO => treat Data pin of FF as sync/excluded (default=NO => treat it as excluded pin, i.e don't try to balance to it, YES => try to balance it if CTS is able to trace to it)
#SetIoPinAsSync NO => treat I/O pin as sync/excluded (default=NO => treat it as excluded pin, YES => try to balance it if CTS is able to trace to it)
RouteClkNet     Yes => runs globalDetailRoute on clk tree using nanoroute. (by default, "setCTSMode -routeClkNet true" is set inside clockDesign/ckSynthesis, so globalDetailRoute is always run)
#PostOpt        YES => turns on opt => resizes buffers or inverters or gating components, refines placement, and corrects routing for signal and clock wires. default=YES.
#OptAddBuffer   NO => Controls whether CTS adds buffers during optimization.
#RouteType      specialRoute
#LeafRouteType  regularRoute
ThroughPin => traces thru the pin, even if pin is clk pin.
 + Iclk_rst_gen/clk_count_reg_1/CLK => div by 4 clk generated using this flop. Causes CTS to get clk thru this pin for balancing clk.
 + Iclk_rst_gen/clk_count_reg_2/CLK => div by 8 clk generated using this flop. Causes CTS to get clk thru this pin for balancing clk.
ExcludedPin => to exclude some pins for CTS purpose. CTS will not try to balance clk thru this pin.
END

#NOTE: when we use ThroughPin for clk pin, then that clk pin thru which we are doing through, is treated as excluded pin , and cts will not try to balance that clk pin with other clk leaf pins. It will actually try to balance the final leaf flops that are connected after going thru that clk pin (i.e connected to Q o/p of such a flop). DynamicMacroModel can be used to balance skew for such flops (see: encounter CTS documentation).
 
#CTB buffers are added as part of clk tree with suffix __L1_ (or L2, L3, etc). Apart from these, __Exclude_0 (or 1, 2, etc) buffers are added by the CTS engine (b/w driver and exclude pin) to exclude the pins specified in the clock tree specification file. This is needed so the driver(s) of the excluded pin(s) does not see a large load if they are located a significant distance apart. All clk pins of flops driven by rootclk are sync pins, and are balanced by CTS. If we use "ThroughPin", then clk pins of the flops driven by the divided clk are also treated as sync pins for CTS and will be balanced. All other pins are treated as "exclude" pins, meaning they are async and CTS doesn't consider them when doing CTS. So, throughpins for clk above will be treated as async and any buffers added to drive clk pins of these will be marked as __Exclude_. These exclude buffers as well as the flops connected to them don't show up in the clock tree browser (in VDIO). They are not considered part of the normal clk tree. So, the total number of flops shown by CTS may not be the same as the total number of flops in the CTS tree, due to "excluded" flops.
#To see list of all flops not in any clk tree, open clk tree browser from VDIO panel, and click on Tool->List->FF not in clk tree. Over here, apart from spare flops, we'll see all "throughpin" flops on which exclude buffers have been added, all spi flops and regfile flops.

#actual clk tree synthesis: cksynthesis resizes inv,buf and clk gating elements unless they have been marked as dont touch. Clock gating components consist of buffers, inverters, AND gates, OR gates, and any other logical element  (defined  in the library) that appears in the clock tree before CTS synthesis inserts any buffers or inverters. Then globalDetailRoute is run to route clk nets
#-forceReconvergent=> Forces CTS to synthesize a clock with self-reconvergence or clocks with crossover points. Without this option, CTS halts and issues errors. To synthesize clocks with crossover points, list such clocks together in the clock tree specification file.
ckSynthesis -report ./dbs/cts/clockt.report -forceReconvergent => Builds  clock trees, routes clock nets, and resizes instances, depending on the parameters you specify.  These routes/placement are not touched again during signal routing.
#clockDesign -specFile Clock.ctstch -outDir ./dbs/cts => optional: this 1 cmd replaces 2 cmds above (specifyClockTree and ckSynthesis. These 2 cmds are called in background). It provides clock_report in ./dbs/cts/clock.report

#CTS reports: On screen, first we see res/cap tables being read for all views (MAX/MIN), then it reads clktree spec file, then it runs ckSynthesis. It does various checks for clk pins, then it builds clk tree, shows subtree 0 (tree from clk i/p port), subtree 1 (tree from first driving gate of clk) and so on. It tries to satisfy the constraints in clktree spec file across all active views. It then does routing and again tries to satisfy all constraints.
#In CTS report, we'll see many subtrees, each of which corresponds to bunch of flops driven directly by the driver.
Ex: on the main screen, we see reports like this for one of the subtrees:
SubTree No: 5 => represents that it is subtree 5, and has all flops driven by driver shown below
Input_Pin:  (Iclk_rst_gen/clk_gate_reg/latch/CLK) => i/p pin of driver
Output_Pin: (Iclk_rst_gen/clk_gate_reg/latch/GCLK) => o/p pin of driver
Output_Net: (Iclk_rst_gen/n27) => net name of clk that is driving bunch of flops on this clk tree.
*** Find 2 Excluded Nodes. => there are 2 excluded nodes on this clktree, which aren't going to be part of CTS.
**** CK_START: TopDown Tree Construction for Iclk_rst_gen/n27 (5-leaf) (1 macro model) (mem=491.2M) => no. of leaf elements is 5, this includes flops as well as buf/clk-gaters for another subtree.
Total 2 topdown clustering.
Trig. Edge Skew=725[532,1257] N5 B0 G2 A0(0.0) L[1,1] score=900 cpu=0:00:00.0 mem=491M
**** CK_END: TopDown Tree Construction for Iclk_rst_gen/n27 (cpu=0:00:00.0, real=0:00:00.0, mem=491.2M)

#set_interactive_constraint_modes {<list_of_constraint_modes>} => Puts  the  software  into  interactive  constraint entry mode for the specified multi-mode multi-corner constraint mode objects. Any timing constraints that you specify after this command take effect  immediately  on  all  active  analysis views that are associated with the specified constraint modes. The  software  stays in interactive mode until you exit by specifying the set_interactive_constraint_modes command with an empty list: set_interactive_constraint_modes { }

set_case_analysis 0 scan_mode_in => we exit out of scan mode back to normal func mode

# Check and save design after clocktree insertion
checkPlace ./dbs/cts/check_place.rpt
checkDesign -noHtml -all -outfile ./dbs/cts/check_design_cts.rpt
saveDesign ./dbs/cts/cts.enc

# Add set_propagated_clock by entering interactive mode
set_interactive_constraint_modes [all_constraint_modes -active]
set_propagated_clock [all_clocks] => propagates delay along clk n/w (accurate only after CTS) from clk source to reg clk pin. We can also specify clk src latency (latency from external src to clk port) using set_clock_latency. Total latency is sum of clk src latency and propagated delay. To specify uncertainty for external src latency, use -early or -late, and tools choose worst one for setup/hold. To specify internal uncertainty (for skew or variation in the successive edges of clk wrt exact clk), use set_clock_uncertainty.
set_interactive_constraint_modes { }

# Timing Analysis after CTS before optimization
timeDesign -postCTS -prefix digtop_post_cts
timeDesign -postCTS -hold -prefix digtop_post_cts -numPaths 50

# Post CTS optimization
#setOptMode -effort high
#setOptMode -simplifyNetlist false
#setOptMode -fixCap true -fixTran true -fixFanoutLoad false => this says which drv viol need to be fixed (usually FO fix not needed)
#-postCTS repairs design rule violations and setup violations after clk tree has been built. -hold will fix hold violations also. -incr performs setup opt if needed further.
optDesign -postCTS -prefix digtop_post_cts_opt => fixes drv and setup viol only (if -hold added here, then it fixes hold viol only. if -drv, then it fixes drv viol only)
optDesign -postCTS -hold -prefix digtop_post_cts_opt => fixes hold viol only.

IMP: all viol should be fixed by now, as from here on, no gates can be added. So, only minor viol related to routing can be fixed. If any gross viol remains by now, it will never be fixed post CTS.

# Check and save design after clocktree insertion post optimization
#checkDesign -all -outfile ./dbs/cts/check_design_cts_opt.rpt
#saveDesign ./dbs/cts/cts_opt.enc

# Timing analysis after optimization
#timeDesign -postCTS -prefix digtop_post_cts_opt
#timeDesign -postCTS -hold -prefix digtop_post_cts_opt

# Save netlist post CTS optimization
saveNetlist ./netlist/digtop_post_cts_opt.v

Route => runs nanoroute to route it, cmd is routeDesign
------
# Import Post CTS design => file from post-CTS step above
#source ./dbs/cts/cts_opt.enc

#setting SI (noise) driven and Timing driven to true enables the SMART algo (abbreviation for Signal integrity, Manufacturing Awareness, Routability, and Timing). By default, nanoroute takes into account both timing and SI while routing. If timing driven is set to false, it uses an older algo. Use the options -routeSiEffort and -routeTdrEffort to adjust the effort level for SI and Timing Driven routing, respectively. These options fine-tune the priorities the router assigns to timing, signal integrity, and congestion. All these options can be selected using gui: route->nanoroute->route.

setNanoRouteMode -routeWithTimingDriven true => minimize timing violation by causing most crit nets to be routed first.
#setNanoRouteMode -routeTdrEffort <0-10> => effort level for tdr (timing driven routing). 0 => purely congestion driven, 10 => fully timing driven.
setNanoRouteMode -routeWithSiDriven true =>  minimize crosstalk violation by wire spacing, layer hopping, net ordering and minimizing the use of long parallel wires.
#setNanoRouteMode -routeSiEffort {high | medium | low } => default is high when timing driven is set to true else default is low. set to high for congested designs (since congested designs have SI problems), low for non congested.
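#A hedged example pulling the options above together for a congested, timing-critical block; the effort values are purely illustrative, not a recommendation from this flow:
setNanoRouteMode -routeWithTimingDriven true
setNanoRouteMode -routeWithSiDriven true
setNanoRouteMode -routeTdrEffort 7    ;# bias toward timing (range 0-10 per above)
setNanoRouteMode -routeSiEffort high  ;# high suggested for congested designs per above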

#specify top and bottom routing layers (by default bot/top routing layers are ones specified in tech lef file).
setNanoRouteMode -quiet -routeBottomRoutingLayer default => specifies lowest layer nanoroute uses for routing. Layers can be specified using the LEF layer names or layer ID numbers. default is lowest layer specified in lef file. range is 1-15 => 1 means metal1 and so on. If POLY is defined as routing layer in tech lef file, then POLY is assigned layer id 1, METAL1 is layer id 2 and so on.
setNanoRouteMode -quiet -routeTopRoutingLayer default => Specifies the highest layer the router uses for routing. default is the highest layer specified in lef file. range is 1-15.

#specify iterations for nanoroute. nanoroute first does global route, then starts detail routing from iteration 0 to 20(max) in steps. Iterations after 0 do not run routing. Instead, they run search and repair. Iterations after 20 run post route opt. start and end iterations are set by default to 0.
setNanoRouteMode -drouteStartIteration default => Specifies the first pass in a detailed routing step.
setNanoRouteMode -drouteEndIteration default => Specifies the last pass in a detailed routing step. set to default (which implies run post route opt). If set to some number, antenna violations will not get fixed

#Pitch/Mgrid options
#setNanoRouteMode -quiet -drouteOnGridOnly none|via|all => we use this option to control off-grid (off-track) routing. Note: grid means track in nanoroute, which is the Metal1 pitch. 3 options:
 none => no restriction (off-grid routing allowed), via => no off-grid routing of vias, all => no off-grid routing of vias and wires
#OBSOLETE: setNanoRouteMode -drouteHonorMinFeature true => This is to honor the Manufacturing Grid. It is set to true by default if MANUFACTURINGGRID is set in the tech lef file. In future releases this is not needed, as nanoroute is always going to honor MGrid.

#antenna violation options
#setNanoRouteMode -routeIgnoreAntennaTopCellPin => Ignores antenna violations on top-level I/O block pins, but repairs antenna violations elsewhere. default is true, so no need to set it.

#antenna violations can be fixed by 2 ways: 1. layer hopping 2. Antenna diode insertion.
1. layer hopping:
setNanoRouteMode -drouteFixAntenna True => This can be used when antenna viol are the only violations, and we want to just fix these. Do a "setNanoRouteMode -reset" before running this
2. Antenna diode insertion:
setNanoRouteMode -routeInsertAntennaDiode true => nanoroute searches in LEF for cells of type ANTENNACELL specified in the LEF MACRO statement. These cells will be used for diode insertion, provided diffusion area is specified for the antenna cell ( ANTENNADIFFAREA ) so Nanoroute understands that adding this cell to the net will reduce the process antenna effects for the gates connected to it. First Nanoroute will use layer hopping, and if violations still remain, it will do diode insertion)
#NOTE: Antenna diodes are not inserted for ECO routing. Also by default, antenna diodes are not inserted on clock nets, since clock nets are don't touch (and also clk nets are routed first, so lot of flexibility in layer hopping allows all antenna viol to be fixed). To have antenna diodes on clock nets, use:
#setNanoRouteMode -routeInsertDiodeForClockNets true

#setNanoRouteMode -reset => resets all setNanoRouteMode parameters to their default values
#getNanoRouteMode => displays everything that's set for nanoroute. good for sanity check.

#Nanoroute
routeDesign -globalDetail => (equiv to "globalDetailRoute") runs global and detailed routing (by default). It's timing and SI driven by default, but we can set both of these false.
#global routing is the initial phase, where the tool plans global interconnect and produces a congestion map. During this phase, NanoRoute breaks the design into rectangles called global routing cells (gcells). It finds connections for the regular nets defined in the NETS section of the DEF file by assigning them to the gcells. The goals of global routing are to distribute and minimize congestion and to minimize the number of gcells that have more nets assigned than routing resources available.
#detail routing is when NanoRoute builds the detailed routing database. Then it routes the wires that connect the pins to their corresponding nets, following the global routing plan. During the search-and-repair phase of detailed routing, NanoRoute repairs design rule violations. The primary goal of detailed routing is to complete the interconnect without creating shorts or spacing violations. Tech lef file has all DRC rules (which are mostly spacing rules for vias and metal lines).
#from VDI gui, we can run routing using route->nanoroute->route. choose timing driven and set scale to 5. (for congestion driven, set scale to 0)
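#If we ever want to run the two phases described above as separate steps instead of "routeDesign -globalDetail", a minimal sketch would be (assuming the setNanoRouteMode settings above are already in effect):
globalRoute  ;# global routing only: assign nets to gcells and produce the congestion map
detailRoute  ;# detailed routing plus search-and-repair, following the global routing plan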

#to add antenna diodes manually to internal nets which still have violations, after routing is done, use this:
attachDiode -prefix <custom_diode> -diodeCell <diodeCellName> -pin <instName> <termName> => adds antenna diode to named pin of named inst.
ex: attachDiode -prefix custom_diode_input -diodeCell AP001 -pin inst1/reg_gater_4 PREZ => adds antenna diode cell AP001 to pin PREZ of instance inst1/reg_gater_4. Names it with prefix "custom_diode_input" so that it's easier to recognize.

# Check and Save design after route
checkPlace ./dbs/route/check_place.rpt
checkDesign -all -outfile ./dbs/route/check_design_route.rpt -noHtml
saveDesign ./dbs/route/route.enc

# Remove any interactive constraints entered before (set_propagated_clock to respecify)
update_constraint_mode -name functional -sdc_files \
    [list /db/BOLT/design1p0/HDL/Autoroute/digtop/Files/input/bolt_constraints.sdc \
          /db/BOLT/design1p0/HDL/Synopsys/digtop/tcl/clocks.tcl \
          /db/Hawkeye/design1p0/HDL/Synthesis/digtop/tcl/case_analysis.tcl \
          /db/BOLT/design1p0/HDL/Synopsys/digtop/tcl/false_paths.tcl \
          /db/BOLT/design1p0/HDL/Synopsys/digtop/tcl/constraints.tcl \
          /db/BOLT/design1p0/HDL/Synopsys/digtop/tcl/multicycle_paths.tcl \
          ./scripts/case_analysis.sdc \
          ./scripts/pre_place_constraints.sdc]

# Add set_propagated_clock by entering interactive mode
set_interactive_constraint_modes [all_constraint_modes -active]
set_propagated_clock [all_clocks]
set_interactive_constraint_modes { }

# extractRC options - PostRoute, Non-Coupled, Native Extractor (low effort). Here, we have switched from cap table lookup (*.capTbl in vdio dir in pdk) to RC extractor for more accurate delays.
#setExtractRCMode: sets rc extraction mode fo extractRC cmd.
setExtractRCMode -reset => all setExtractRCMode parameters are reset to default value.
setExtractRCMode -engine postRoute => postroute uses postroute engine where RC extraction is done by detailed measurement of distance b/w wires, and coup cap is reported. preroute uses preroute engine where RC extraction is done by fast density measurement of surrounding wires, and coup cap is not reported. use option -engine postroute with -effortLevel <high or signoff> to achieve greatest accuracy.
setExtractRCMode -coupled false => false implies coupling cap to be grounded, typically used for STA. For SI analysis, this should be set to true so that coupling cap is o/p separately than gnd cap.

setExtractRCMode -effortLevel low => low invokes native extraction engine (lowest accuracy), medium invokes TQRC (Turbo QRC), high invokes IQRC (Integrated QRC), while signoff invokes standalone QRC (highest accuracy). Version of QRC to be used is fixed for a particular Encounter version, but we can change it by specifying it in .amerc file in vdio dir as follows:
.amerc: ext-10.1.2_HF1 => add this line for extractRC to pick up this version of RC extractor

#setExtractRCMode [-total_c_th, -relative_c_th, -coupling_c_th] <value> => there are 3 separate parameters: total_c_th, relative_c_th, coupling_c_th. These determine the threshold for when the coupling cap of nets gets grounded. We don't set these options in our flow, as the default values based on the process node (set using the setDesignMode -process command) take care of it. A combined example follows the -capFilterMode description below.
#setExtractRCMode -total_c_th <cap> => If total cap for nets < total_c_th, coupling cap is grounded (default=5ff but adjusted based on process node).
#setExtractRCMode -coupling_c_th <cap> => If coupling cap (NOT total cap) for nets < coupling_c_th, coupling cap is grounded (default=3ff but adjusted based on process node),
#setExtractRCMode -relative_c_th <ratio> => If the total coupling cap b/w  a pair of nets is less than the percentage (specified with this parameter) of the total cap of the net with the smaller total cap in the pair, the coupling cap b/w these two nets will be grounded (default=0.03).

#setExtractRCMode -capFilterMode  <relOnly | relAndCoup | relOrCoup> => this option is used only when -coupled is set to true above. default is relAndCoup for process node below 130nm, else default is relOnly. process node is set using setDesignMode -process command.
#any setting => if net's cap < total_c_th, then coupling cap grounded regardless of the -capFilterMode setting.
#relOnly => if net's coupling cap < relative_c_th, then coupling cap grounded.
#relAndCoup => if net's coupling cap < relative_c_th and coupling_c_th, then coupling cap grounded. most restrictive.
#relOrCoup => if net's coupling cap < relative_c_th or coupling_c_th, then coupling cap grounded.
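#A combined example of the filtering options above, using the 150nm default threshold values quoted in this section purely for illustration (normally we let setDesignMode -process pick these):
setExtractRCMode -engine postRoute
setExtractRCMode -coupled true
setExtractRCMode -total_c_th 5        ;# ff
setExtractRCMode -relative_c_th 0.03
setExtractRCMode -coupling_c_th 3     ;# ff
setExtractRCMode -capFilterMode relAndCoup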

#setDesignMode    -process 150 => implies process is 150nm and above. We adjust this based on what nm process we are using so that the tool automatically adjusts coupling cap thresholds. For 150nm, total_c_th=5, relative_c_th=0.03 and coupling_c_th=3. For lower nm tech, coupling cap threshold raised (i.e, any coupling cap below a certain value is kept as coupling instead of lumping to gnd)

#extractRC => not needed to run explicitly here, since timeDesign below invokes extraction using the setExtractRCMode settings above.

#instead of using setExtractRCMode, we can also use setDelayCalMode
#setDelayCalMode -engine Aae -SIAware false

# Timing Analysis after route
timeDesign -postRoute -prefix digtop_post_route
timeDesign -postRoute -hold -prefix digtop_post_route -numPaths 140

# Post-route optimization => we need this, since routing may have introduced some hold and drv violations.
setOptMode -effort high
setOptMode -maxDensity 0.98 =>Specifies the maximum value for area utilization. optdesign does not grow the netlist above this value.
setOptMode -holdTargetSlack 0.1 -setupTargetSlack 0.05
setOptMode -simplifyNetlist false
#-postRoute repairs design rule violations and setup violations after routing is done. -hold will fix hold too. usually need to fix hold and drv
#optDesign -postRoute -prefix digtop_post_route_opt => to fix setup and drv
optDesign -postRoute -hold -prefix digtop_post_route_opt => fix hold
optDesign -postRoute -drv -prefix digtop_post_route_opt => fix drv

# Timing Analysis after route opt
timeDesign -postRoute -prefix digtop_post_route_opt
timeDesign -postRoute -hold -prefix digtop_post_route_opt

# Check and Save design after optimization
checkPlace ./dbs/route/check_place_opt.rpt
checkDesign -all -outfile ./dbs/route/check_design_route_opt.rpt
saveDesign ./dbs/route/route_opt.enc

# Save netlist post-route optimization
saveNetlist ./netlist/digtop_post_route_opt.v

STA: Here we run timing in all modes
----
set_analysis_view -setup {func_max func_min scan_max scan_min} -hold {func_max func_min scan_max scan_min} => imp to run timing in all modes as there might be paths in setup/hold in other views which may show up in PT, but may never get opt in VDIO. i.e there may be hold paths in func_max and setup paths in func_min which will need to be fixed here.

Run timing, Repeat opt step if necessary as in route step, rerun timing, then check and save.
#-postRoute repairs design rule violations and setup violations after routing is done. -hold will fix hold too. We usually need to fix hold and drv after running STA, since some paths might start failing once timing is enabled for SCAN mode as well, so new paths may pop up.
#optDesign -postRoute -prefix digtop_post_route_sta_opt
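#A sketch of this STA-stage loop, reusing only commands already shown in this flow; the prefixes/paths and the decision to rerun opt are illustrative:
set_analysis_view -setup {func_max func_min scan_max scan_min} -hold {func_max func_min scan_max scan_min}
timeDesign -postRoute       -prefix digtop_post_route_sta
timeDesign -postRoute -hold -prefix digtop_post_route_sta
# only if new violations show up in the added views:
optDesign -postRoute -hold -prefix digtop_post_route_sta_opt
timeDesign -postRoute       -prefix digtop_post_route_sta_opt
timeDesign -postRoute -hold -prefix digtop_post_route_sta_opt
saveDesign ./dbs/sta/sta.enc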

Then check area:

set dbgSitesPerGate 5 => /db/pdk/lbc7/.../lef/msl270_lbc7_core_2pin.lef  leffile defines coresite size at the top of lef file. This coresite shows the min x dimension that any gate can have. It's basically M2 pitch, as we allow gate widths to be in multiple of M2 pitch. We take x dimension of  nand2 x1 (NA210) gates, which is usually 4 or 5 times of this M2 pitch and set dbgSitesPerGate to that number. For this case, CORESITE size is 0.9x11.0, while NA210 has size 4.5x11.0, so dbgSitesPerGate is 4.5/0.9 = 5. This number is very important and changes with process tech. For LBC8, it's 6.8/1.7 = 4.

#If you look at the layout of NA210 in lbc7_2pin, it's 4.5x11um (it's in um) with Lmin=0.4um (400nm). It has 3 metal1 lines (min W=0.3um) and 2 poly lines running vertically. So, the width and spacing of these 5 lines set the x dimension of the cell. In contrast, IV110 has an area of 3.6x11um. It has 2 metal1 lines and 1 poly line. The reason for such a large area is to leave space for routing.

#gatecount of imported design, and what VDI has currently (we can also use cmd "checkFPlan -util" or "checkPlace" instead of "reportGateCount" to see current design's stats. reportGateCount should be used instead of reportDesignUtil as it's supported cmd)
reportGateCount -level 5 -outfile gatecount_sta.rpt => gives size of the imported design in terms of gatecount. Physical cells (as FILLER, etc) are not reported in this. -stdCellOnly reports stdcells only (no IP_blocks / IO cells reported). -module <modulename> reports gate count for named module. -level reports gate counts for sub-hier upto that level deep. So very useful to see where size increase is coming from.
For gatecount, this is the formula used by VDI: gateCount = moduleArea / gateSize, where
moduleArea is the area of the module (sum of the areas of all instances inside of the module, including standard cells, blocks, and I/O cells),
gateSize = dbgStdCellHgt x dbgDBUPerIGU x dbgSitesPerGate
dbgStdCellHgt is the standard cell row height, dbgDBUPerIGU is the M2 layer pitch, dbgSitesPerGate is a user-defined global variable that determines the gate size the software assumes when calculating the gate count. For example, the default value of 3 means the assumed gate size is equal to 1 standard cell row height and 3 M2 layer pitch widths. So, gatesize is basically in terms of M2 pitch, so we set "dbgSitesPerGate" parameter above to get gatesize in terms of NA210 size.

For our case, gatesize = 11.0x0.9x5 = 49.5um^2 (size of a nd2x1 gate). Note: we got the M2 pitch by looking in vdio/lef/msl270_lbc7_tech_2layer.lef. To confirm the area of the nd2x1 gate, we can also look in vdio/lef/*_2pin.lef and get the exact X-Y dimensions of the nd2 gate.
So, this reports total no. of gates in terms of equiv nd2x1 (NA210) gates. It reports total cell area (area occupied by module), and total gates =  total_area/nd2x1_area = 106584/49.5 = 2153 gates. It also reports total no of cells placed (cells mean instances of stdcells, i.e flop is 1 cell). It also gives density which is calc as area occupied by cells divided by the total area of the block.
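#A quick Tcl sanity check of the gate-count math above (all numbers are the ones quoted in this example):
set gateSize [expr {11.0 * 0.9 * 5}]  ;# dbgStdCellHgt x M2 pitch x dbgSitesPerGate = 49.5 um^2
puts [expr {106584.0 / $gateSize}]    ;# ~2153 equivalent nd2x1 (NA210) gates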

#checkPlace => reports placement density and no. of placed and unplaced instances.

#NOTE: DC report_area gives area by looking at area field in .lib file (in synopsys/src dir) for each cell.  For our case it's in terms of nd2x1 gate, since NA210 is assigned an area of 1, and all other stdcells have an area relative to this. So, if it says "Total cell area: 1750" => total area is 1750 nd2x1 gates or 1750*49.5 um^2 = 86625 um^2. to compare gate area, we just compare DC cell area with VDI Gates count (both of which are in terms of nd2x1). This shows us what are the extra no. of gates added post route.

SIGNOFF
---------
#set extract rc mode to signoff, extract RC.
setExtractRCMode -effortLevel    signoff => signoff is used as it's the most accurate
setExtractRCMode -coupled        true => coupling cap to be kept
setExtractRCMode -capFilterMode  relAndCoup

setDesignMode    -process 150 => implies process is 150nm and above.

extractRC => Extracts  resistance  and  capacitance  for  the interconnects and stores the results in an RC database. done after routing

#time design
timeDesign -signoff -reportOnly       -prefix digtop_post_route_signoff
timeDesign -signoff -reportOnly -hold -prefix digtop_post_route_signoff

SIGNAL INTEGRITY
----------------
Cadence CeltIC is the signal integrity analyzer in the Encounter platform. It performs noise analysis (impact of noise on both delay and functionality) and feeds repairs back into PnR. Noise libs (.cDB) are created to characterize cells efficiently.

setExtractRCMode -reset
setExtractRCMode -engine         postRoute
setExtractRCMode -effortLevel    signoff
setExtractRCMode -coupled        true
setExtractRCMode -lefTechFileMap ./scripts/qrc_layer_map.ccl
setExtractRCMode -capFilterMode  relAndCoup

setDesignMode    -process 150

#set view, propagate clk and set ocv/cprr as during route.
set_analysis_view -setup {func_max func_min scan_max scan_min} -hold {func_max func_min scan_max scan_min}

set_interactive_constraint_modes [all_constraint_modes -active]
set_propagated_clock [all_clocks]
set_interactive_constraint_modes { }

setAnalysisMode -analysisType onChipVariation -cppr both

#delay calc mode: used when optimizing design
setDelayCalMode -engine signalStorm -signoff true => for signoff, use signalstorm delay calculator. default is feDc which is EDI delay calculator. -signoff enables signoff quality (highest accuracy) delay calc mode.

#set SI mode
setSIMode       -reset => resets all param to default
setSIMode       -analysisType default -acceptableWNS same => analysis type resets parameters to default or pessimistic settings. acceptableWNS Specifies the worst negative slack (WNS) that is acceptable for the design. same means keep slack same as before SI, usually 0. Or we can provide the WNS value.

setSIMode -insCeltICPreTcl  { source scripts/pre_celtic.tcl } => Changes the default environment variable values to the specified values. sets these parameters. message_handler -set_msg_level ALL; message_handler -set_msg_level ALL
setSIMode -insCeltICPostTcl { source scripts/post_celtic.tcl} => Executes the specified CeltIC NDC commands after the SI analysis engine performs noise analysis. It runs these cmd: generate_clock_report -reverse_slope_limit -1 -nworst 10 -file timingReports/clock_report.rpt, generate_report -txtfile timingReports/noise.rpt.
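#Based only on the description above, scripts/post_celtic.tcl presumably contains something like the following (a sketch, not the exact project file):
generate_clock_report -reverse_slope_limit -1 -nworst 10 -file timingReports/clock_report.rpt
generate_report -txtfile timingReports/noise.rpt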

#timeDesign Runs trial route, extraction and timing analysis. also generates detailed timing reports. -signoff calls QRC for extraction. -si generates glitch violation report and incremental sdf (backannotates an incr.sdf) to calc WNS due to noise. runs SI timing in MMMC mode (all active views), and shows worst case timing.
timeDesign -signoff -si -prefix si_setup
timeDesign -signoff -si -hold -prefix si_hold

# Report Timing including incremental delays for setup/hold. This reports timing for all analysis views that are in effect at this point using "set_analysis_view" cmd (which is FUNC_MAX/MIN, SCAN_MAX/MIN for both setup/hold)
#set_analysis_view -setup {func_max func_min scan_max scan_min} -hold {func_max func_min scan_max scan_min} => change analysis view if you need timing only for a particular view i.e FUNC_MAX.

setAnalysisMode -checkType setup => default is setup.
report_timing -nworst 1 -max_points 500 -check_type setup -net -path_type full_clock -format {instance arc cell slew delay incr_delay arrival required} > timingReports/report_timing_setup.rpt

setAnalysisMode -checkType hold
report_timing -nworst 1 -max_points 500  -check_type hold -net -path_type full_clock -format  {instance arc cell slew delay incr_delay arrival required} > timingReports/report_timing_hold.rpt

reportDelayCalculation -from Itimergen/U185/Y -to Itimergen/U1925/A1

#fixing SI.
setOptMode -effort high
setOptMode -maxDensity 0.98
setOptMode -usefulSkew false
setOptMode -holdTargetSlack 0.1 -setupTargetSlack 0.05

#optDesign: -postRoute fixes both setup(incr) and drv if nothing specified. -hold fixes hold violations also. -si corrects glitch and setup violations caused by incremental delays due to coupling cap. -si can only be used with -postroute.
#optDesign -signoff -postRoute -si
optDesign -signoff -postRoute -hold -si -incr

#NOTE: after optdesign finishes it shows setup/hold slack without SI. When we run timeDesign or report_timing, it shows slack with SI. So, the slack with SI will always be lower than what optDesign reports.

timeDesign -signoff -si       -prefix si_setup_opt
timeDesign -signoff -si -hold -prefix si_hold_opt

#setup
setAnalysisMode -checkType setup => mode has to be "setup" or else report_timing won't report timing for setup. default is setup, so this cmd not needed.
report_timing -nworst 1 -max_points 500 -check_type setup -net -path_type full -format {instance arc cell slew delay incr_delay arrival required} > timingReports/report_timing_si_opt_setup.rpt
#hold
setAnalysisMode -checkType hold => mode has to be "hold" or else report_timing won't report timing for hold.
report_timing -nworst 1 -max_points 500  -check_type hold -net -path_type full -format  {instance arc cell slew delay incr_delay arrival required} > timingReports/report_timing_si_opt_hold.rpt

setDelayCalMode -considerMillerEffect true
setUseElmoreDelayLimit 300
set_global timing_cppr_self_loop_mode true
set_global timing_disable_bidi_output_timing_checks false
set soceSupportWireLoadModel 1

checkPlace ./dbs/si_fix/check_place.rpt
checkDesign -all -noHtml -outfile ./dbs/si_fix/check_design_sta.rpt
saveDesign ./dbs/si_fix/si_fix.enc


FILLER => add filler cells. Fillers maintain continuity of VDD/VSS and of NWELL/PWELL. After running filler, placement density will go to 100%, so you can't place anything more. Go back to the post route step to do any opt.
#NOTE: filler cells are not defined in .lib file, as they don't have any function or timing. So, when we add filler cells, these don't get saved in verilog netlist (as only the cells in .lib are used for verilog netlist), but are saved in the def file.
-------
There are 2 filler cell flows:
1. Normal filler cells: Here, filler cells are just poly.
addFiller -cell SPAREFILL1 SPAREFILL2 FILLER_DECAP_P6 -prefix FILLER_NORMAL => -prefix adds a prefix to all these cells so it's easy to identify this.

2. ECO filler cells: Here, filler cells are eco cells (gate array cells) which can be converted to any desired gate by just altering metal layers (they require extra CONT mask too). We fill with ECO filler cells and then with normal filler cells.
addFiller -cell  FILLER5LL FILLER10LL FILLER15LL FILLER20LL FILLER25LL FILLER30LL FILLER40LL FILLER50LL FILLER55LL -prefix FILLER_ECO => ECO cells added first so that we can add as many of these cells as possible. ECO cell widths are multiples of the X-grid, so there may be single-grid gaps in the design after placing ECO cells, which can be filled by normal filler cells.
addFiller -cell  SPAREFILL1LL SPAREFILL2LL SPAREFILL4LL SPAREMOSCAP3LL SPAREMOSCAP4LL SPAREMOSCAP8LL -prefix FILLER_NORMAL => normal cells added later so that any remaining space not filled by ECO cells will be filled with these normal filler cells.
#To see ECO filler cells only on the gui: do
selectInst *FILLER_ECO* => selects all filler eco on gui, to help us see if they are uniformly placed.
#To find the total num of FILLER cells used, goto Tools->DesignBrowser. On the new window do find=Instance and then search for *FILLER_50* => This will show all fillers which are FILLER50. Select all of them from the list below (by using left mouse) and they will be highlighted on the gui. We can also count the num of fillers this way to see how many of them are there for ECO purposes. Repeat for other filler cells. Filler cells are numbered sequentially for ECO fillers and NORMAL fillers, so it's easy to count them. These filler cells cannot be counted by using any script, as they don't exist in the verilog netlist of the enc database (enc.dat).
checkFiller => reports any gaps found inside the core area where there are no filler cells. shows up on gui on all such missing places. Make sure these gaps are OK
 
# Check and save design
checkPlace ./dbs/filler/check_place.rpt
checkDesign -all -noHtml -outfile ./dbs/filler/check_design_filler.rpt
saveDesign ./dbs/filler/filler.enc

Final Check => does final checks
-------------
# Start Clean
freeDesign

###############################################
# Import post sta design
source ./dbs/filler/filler.enc

###############################################
# Verify Connectivity/Geometry/Antenna

#verifyConnectivity => Detects  conditions such as opens, unconnected wires (geometric antennas), unconnected pins, loops, partial routing, and unrouted nets; verify connectivity can also be chosen from Gui thru top panel: Verify-> Verify connectivity. Choose Net type to "all" to check all types of nets (regular/special) or "Regular only" to exclude special nets as PG nets (-noSoftPGConnect also disables checking of soft Power/Ground connects). -geomConnect uses geometric model instead of centerline model so that if the wires overlap at any point, they are considered to be connected, they do not have to connect at the center line. For check types, click appr box. Provide name/path for conn rpt.
verifyConnectivity -type all -error 1000 -warning 50 -report ./dbs/final_check/connectivity.rpt => checks for all net types, all nets and all default checks.

#verifyGeometry => checks for width, spacing, shorts, off routing/manufacturing grid, via enclosure, min cut, and internal geometry of objects and the wiring between them. Many options can be added on cmd line or using gui. -allowRoutingBlkgPinOverlap allow routing obstructions to overlap pins.
verifyGeometry -allowRoutingBlkgPinOverlap -report  ./dbs/final_check/geomtry.rpt

#verifyProcessAntenna => Verifies process antenna effect (PAE) and maximum floating area violations. -pgnet checks tie-high and tie-low nets also for AE. -noIOPinDefault specifies that ANTENNAINPUTGATEAREA, ANTENNAINOUTDIFFAREA, ANTENNAOUTPUTDIFFAREA keywords from lef file are not applied to IO pins. These options can be chosen from GUI too.
verifyProcessAntenna -error 1000 -reportfile ./dbs/final_check/antenna.rpt -leffile ./dbs/antenna.lef

#check for max_cap/max_tran/max_fanout violations
reportTranViolation => reports transition vio on all nets (>4ns or limit specified in .lib file)
reportCapViolation => reports cap vio on all nets (>150ff or limit specified in .lib file)
reportFanoutViolation

#optional: additional checks
verifyPowerVia
checkTieHiLowTerm
checkAssignStatement
checkPhyInst
checkFloatingInput
checkFeedbackLoop
checkSpareCell
checkNetCollision
checkLECDir

summaryReport -noHtml -outfile summaryReport.rpt -outdir ./dbs/final_check => reports stats for entire design.

Look in dbs/final_check/*.rpt for conn, ant, geom violations, and in summaryReport.rpt for all other reports. Also look in checkPlacement.rpt and checknetlist.rpt.
 
Export => exports design
-------
Need to give .def file (for place and route info) and .v file (for running simulation on top level). Also, need to give spef file to digital simulation team (for cap,res, other extracted parameters to run gate level simulation with these parasitics back annotated).

SPEF: standard parasitic exchange format. Part of the "IEEE 1481-1998" std for IC delay and power calculation systems, and part of Open Verilog International's delay-calculation-system (DCS) standard. Based primarily on SPF (std parasitic format [includes DSPF and RSPF], useful in Spice sims), SPEF has extended capability and a smaller format. It represents parasitic data of wires in a chip in ASCII format for parasitic parameters R (ohm), C (farad) and L (henry) for RC (or RLC) timing modeling. Used after layout to back-annotate timing for STA & simulation.

SDF: standard delay format. while spef contains actual RLC values, these are annotated in STA tools (like PT) and wire delays calculated. These wire delays (from spef file) along with cell delays  (from liberty files used during synthesis) are then put in sdf file (no info abt RC here), which can then be used by STA tools to generate timing. RC extraction tools generate spef file, while STA tools use this to generate SDF file.

#export native and/or QRC coupled min/max spef files (native is a crude extractor using a cap lookup table, while QRC is the assura extractor which solves Maxwell's equations in 3D).
NOTE: extractRC has been run many times previously, but we never generated spef files. So, we run it again to make sure we get clean extraction. All extract settings remain in effect unless overwritten here.
//native
setExtractRCMode -effortLevel    low => invokes native extractor
extractRC
rcOut -rc_corner max_rc -spef ./dbs/final_files/digtop_native_max_coupled.spef
rcOut -rc_corner min_rc -spef ./dbs/final_files/digtop_native_min_coupled.spef

//qrc
setExtractRCMode -reset
setExtractRCMode -engine         postRoute
setExtractRCMode -effortLevel    signoff => invokes highest accuracy qrc extractor
setExtractRCMode -coupled        true => if set to false, coupling caps are lumped to gnd.
setExtractRCMode -lefTechFileMap ./scripts/qrc_layer_map.ccl
setExtractRCMode -capFilterMode  relAndCoup
setDesignMode    -process 150 => Based on process node specified (here it's 150nm), various coupling thresholds are chosen.

extractRC

rcOut -rc_corner max_rc -spef ./dbs/final_files/digtop_qrc_max_coupled.spef
#delayCal -sdf ../output/digtop_max.sdf => to gen max sdf from QRC extractor
rcOut -rc_corner min_rc -spef ./dbs/final_files/digtop_qrc_min_coupled.spef
#delayCal -sdf ../output/digtop_min.sdf => to gen min sdf from QRC extractor

# Export DEF
set dbgDefOutLefVias 1 => This ensures that all Vias (std, custom or using viarule) will be defined in the def file itself. Vias are represented by patterns, so there is no problem of whether matching vias exist in the pdk or not when importing these into icfb. This is important, else there will be vias referencing other vias/via-rules which may not be present in the pdk, causing import errors.
set dbgLefDefOutVersion 5.5 => If Def is set to 5.6 or 5.7, then viarule is still present in def file. If matching viarule is not there in pdk, then def import into icfb will cause errors. So, use def 5.5 to avoid this issue.
defOut -floorplan -netlist -routing ./dbs/final_files/digtop_final_route.def

# Export Netlist
#saveNetlist digtop_final.v  -includePhysicalCell {SPAREFILL1 SPAREFILL2 SPAREMOSCAP4 FILLER5} -excludeLeafCell -includePowerGround => This creates netlist which has VDD/VSS ports on all stdcells and module, and includes all physical cells specified (If no physical cells specified, then all filler cells included). netlist will have additional lines like "FILLER5 FILLER_INST_24 ();". Tool figures out physical cells based on "addFiller" cmd used previously, as there's no special property in Filler cells lef file to identify them as filler cells. "-excludeLeafCell" excludes leaf cell defn (i.e defn of AN210 etc) to be written to netlist
saveNetlist ./dbs/final_files/digtop_final_route.v => doesn't have VDD/VSS ports, nor any physical cells in it.

NOTE: final netlist above (netlist: digtop_final_route.v) has the format shown below.

1. First all modules in RTL are defined in terms of gate level components (structural netlist) with the same module name as in RTL. If the same module is called 4 times, then there will be 4 defn of this RTL module with 4 different names. This uniquification is done so that separate optimization can be done on each instance of such a module.
Ex: module module_name (i/o port defn) ... endmodule.
Note if scan test ports were added during dft step in synthesis, then the module is renamed as module_name_test_1

2. Then all such modules are instantiated in the top level module "digtop"(see bullet 5). The instance name is kept same as the defn name. However for modules with *_test_1 defn name, test_1 is dropped and RTL name is kept for instance names. signal_name to connect module_defn_pin_name are kept the same as in RTL as much as possible.
Ex: module_defn_name module_instance_name (.module_defn_pin_name(signal_name), ...)

3. For instances where dft was added, 3 new pins are added => test_si, test_so and test_se. For multiple test chains, we may see more than 1 si/so. i.e test_si1, test_si2, .. and test_so1, test_so2, ...etc. test_si connects to first scannable flop's SD pin, test_se connects to all scannable flop's S pin, Q pin of this flop connects to SD pin of next flop and so on forming a scan chain, and the final o/p pin is test_so pin which is just a buffered version of Q o/p pin of the last flop. Note there may be logic b/w Q o/p pin of the last flop and the PO pin of block, but scan chain connects just the o/p of flop to SD i/p of next flop.

4. a module spares is also there, which has all spare cells in it. spare modules don't have any i/o ports. If there were multiple spares then there would be multiple spares def as module spares_1 (..) endmodule, module spares_2 (...) endmodule, etc. Pins of spare gates are tied to 0/1. These 0/1 come from Tieoff gates (TO*) in spare module, which provide a zero o/p and one o/p. Sometimes o/p of these tieoff cells are buffered inside the module to provide signal to other cells, while other times different spare gates (inv,nd2,etc) are tied to different 0/1 from different TO* gates. NOTE: spare cells inside spare cell module have o/p pins omitted in their instantiation. Reason might be to avoid having floating o/p nets as the o/p pins of spare cells are not used anyway.

5. Top module "digtop" is defined at the end. It has buffers for i/p signals (BU*), for clk signals (CTB*), for o/p signals (BU*), tieOff (TO*). It instantiates all the other modules defined in module defn. Top level module has extra scan pins added: sdata_in and sdata_out.

# Export Gds (Do: source ./dbs/filler/filler.enc after opening encounter, before you do streamOut. Then filler.enc db is used for gds)
streamOut ./dbs/final_files/digtop_final_route.gdsii => Creates a GDSII Stream file version of the current database. By default, the Encounter software creates a Version 3 GDSII file.
#-libName <libname> Specifies the library to convert to GDSII format. Default: Name is DesignLib.
#Note: we can also use Gui: file->Save->GDS/OASIS

Report specific timing paths to match b/w PT/ETS and Encounter:
----------------------
set_analysis_view -setup {func_max} -hold {func_min} => change analysis view if you need timing only for other view.
setAnalysisMode -checkType hold => default checktype is setup.
report_timing -check_type hold -from u_DIG/flop1_r_reg -to u_dsp/sync1_reg -path_type full_clock => shows detailed clock path too.

OA design exchange process:
--------------------------
Instead of using defin for design exchange, we can directly write an OA database. In conventional flow, we take in floorplan def (pins def) and generate DEF or GDS. We use abstract LEF file for stdcells. In OA flow, we take in floorplan OA db directly and generate OA db. We use abstract OA db for stdcells. Abstract OA db doesn't have physical layout, just an abstract view. Steps:
1. Use encounter 9.1 or later. Add these to the scripts/import.conf file in the vdio dir (/db/NOZOMI_NEXT_OA/design1p0/HDL/Autoroute/digtop/vdio):
 A. set rda_Input(ui_oa_oa2lefversion) {5.6}
 B. set rda_Input(ui_oa_reflib) "pml30_lbc8_2pin lbc8" => provide name of stdcell and tech lib
 C. set rda_Input(ui_oa_abstractname) {abstract}
 D. set rda_Input(ui_oa_layoutname) {layout}
2. For importing floorplan: In VDI gui, goto File->Load->OA Cellview. Provide library=HAYATE_dig1p0, cell=digtop, view=layout (or on cmd line: oaIn HAYATE_dig1p0 digtop layout). Not needed for our purpose, since we don't do floorplan import.
3. After going thru the flow, and running export_final.tcl, we are ready for OA db creation. In VDI gui, goto File->Save Design. choose data type=OA, library=HAYATE_dig1p0, cell=digtop, view=layout (or on cmd line: saveOaDesign HAYATE_dig1p0 digtop layout).
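#For reference, the cmd-line equivalents mentioned in steps 2 and 3 above, collected together (library/cell/view names are the ones used in this example):
oaIn HAYATE_dig1p0 digtop layout          ;# step 2: import the floorplan OA cellview (skip if not importing a floorplan)
saveOaDesign HAYATE_dig1p0 digtop layout  ;# step 3: write the routed design out as an OA cellview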

This creates an OA db in the vdio dir. Wherever we are trying to save the OA db, we need to have a cds.lib file which needs to have these 5 lines:
SOFTINCLUDE $CIC_HOME/tools/dfII/local/cds.lib
DEFINE lbc8 /data/pdkoa/lbc8/2011.12.15/cdk/lbc8
DEFINE pml30_lbc8_2pin /data/pdkoa/lbc8/mcache/diglib/pml30/DIGLIB-PML30-RELEASE-r2.5.1_2_f/pml30_lbc8_2pin
DEFINE avTech /apps/artisan_cds/assura/3.2_EHF2_OA/tools/assura/etc/avtech/avTech
DEFINE HAYATE_dig1p0 HAYATE_dig1p0

In vdio dir, OA db is created under HAYATE_dig1p0 dir, which has "digtop" subdir, data.dm and tech.db files. "digtop" dir has "layout" dir which has layout.oa file, master.tag, digtop.conf and multiple other files. Make sure, digtop.conf file has same parameters as import.conf file. This dir structure is exactly the same as in "/db/NOZOMI_NEXT_OA/cds/HAYATE_dig1p0" which has digtop subdir, data.dm and tech.db files along with other subdir for schematic modules. "digtop" dir has "layout" dir (along with schematic and symbol dir) which has layout.oa file and master.tag in layout dir.

4. Now, we need to import this data in virtuoso. open icfb where we saved the OA library (/db/NOZOMI_NEXT_OA/design1p0/HDL/Autoroute/digtop/vdio). In lib mgr, we should see our "HAYATE_dig1p0" lib. Open digtop layout. We see that design is saved as OA abstract view, so we need to save it as layout view. To do that goto Tools-> Remaster Instances. Leave library and cell name empty. enter "search for" viewname as "abstract" and "update to" viewname as "layout". click OK, and the physical layout appears. Now, we can add pin labels the way we do it normally, and then save the design.

NOTE: this whole process is only for layout transfer (substitute for Def import). We still have to do schematic/symbol transfer using Verilog import, exactly the way we used to do it normally. So, the OA db process only saves us the time of DefIn.

------------------------------------------------------

#Mask formats: (all these formats are hier formats). Files are easily over 100GB in size. OPC is done on gdsii and oasis files; ~90% of mask data files are manipulated, refractured, and inspected before going into the actual mask.
--------------------
GDSII (graphics database system 2): Now owned by Cadence. It's used for exchange of IC layout data and is also given to the Fab for IC fabrication. It consists of different layer patterns and shows all the different layers, with each layer numbered as layer 1, layer 2, etc. It doesn't know which layer is what, as it's just showing patterns. In order to map these layer numbers to actual layer names in the pdk, we need a layer map file. This layer map file is in the pdk dir. For 1533e035, it's in: /db/pdk/1533e035/current/cdk446/current/doc/stream.map. This has each cds layer name mapped to a gds layer number. For ex: layer 1 is mapped to NWELL, layer 2 to ACTIVE, etc. k2_viewer from cadence can be used to view gds files. See cadence_virtuoso.txt for generating gds from layout.

OASIS (Open Artwork system Interchange standard for Photomasks) format: successor to GDSII. Owned by trade and std org  SEMI (Semiconductor Equipment and Materials International). Open std format to rep physical and mask layout data. It reduces the size of files by 10x. OASIS.MASK further reduces it by half. It allows the same datafile to be used for pattern generation, metrology and inspection.

MEBES format: Design layout files, in the form of either GDSII or Oasis data formats, are transferred to Mebes format for transmission to photomask shops. Mebes is a proprietary mask data format from Applied Materials Inc. It is regarded as the de facto industry standard for exchanging fractured photomask data. commonly used format for electron beam lithography and photomask production tools. Inspection tools inspect these files and perform MRC (manufacturing rule check) which is DRC-like check on post fractured data. Mebes files are generally much more data-heavy than either GDSII or Oasis formats because of the addition of resolution enhancement technique (RET) features and the need to provide essentially flat data--with a very limited amount of hierarchy--to e-beam photomask pattern generation tools.

LAFF format: seems like it's internal TI format. Look in eco.txt for more details.

-------------------------------------------------
=============================================

Done with all required steps. do si_check (for signal integrity, if needed) and si_signoff for final signoff checks.

**************************************************************************

---------------------------------------
Encounter Warnings and errors:
------------------------------------
A. reading .lib files during reading config file:
-----------------------------------------
Log:
**************
Reading max timing library '/db/pdk/lbc8/rev1/diglib/pml30/r2.5.0/synopsys/src/PML30_W_150_1.65_CORE.lib' ...

*WARN: (TECHLIB-436):  Attribute 'fanout_load' on output/inout pin 'CO' of cell 'AD210' is not defined in the library. Either define this value in the library or use set_default_timing_library to pick these values from a default library.
*************
Reason: fanout_load not present. default is set to 1.
-------------------------------

B. On running verifyGeometry or during nanoRoute:
----------------------------------
verifyGeometry: *WARN: (ENCVFG-47):    Pin of Cell mldd_env_thrsh_out_4_I_buf at (15.300, 1062.300), (32.300, 1066.100) on Layer MET1 is not connected to any net.
NanoRoute: #WARNING (NRDB-1005) Can not establish connection to PIN S at (558.900 206.100) on METAL1 for NET net1. The NET is considered partially routed.

These warnings say that a pin is not connected to any net. Usually, after issuing these warnings, globalDetailRoute will complete the connection of the previously partially connected nets. In summary, this warning shows that there might be some issue (mentioned above), but if the issue in the design is not real, then globalDetailRoute will complete the connection of these partially connected nets. When we get it during verifyGeometry, check that location to make sure it's connected properly. Most of the time, it throws this warning for the VDD/VSS pin of some cells.
Ex: during optDesign we see these warnings because optDesign is free to move instances around during placement, but fixed clock wires connected to their pins cannot be moved at this stage. That is the reason some pins are left unconnected after instance movement by optDesign, resulting in this warning. Also, if a driver driving an o/p port is moved, then since the port can't move, this warning is issued. Nanoroute will try to fix it by adding extra routing during a later stage.

For debugging it to see if the issue is real or is just a warning while doing nanoRoute, some verification can be performed as below :
1. checkPlace -checkpinAccess
2. verifyConnectivity
3. grid check (Sometimes the pins are not properly on grid)
4. Proper Layout connection.

------------------
C. **WARN: (ENCDB-2136):For instance 'IShootCtrl/g22236', its Input term 'A' does not connect to a 'TieLo' net. It is floating.
 
This happens when i/p pins get connected to 1'b1 or 1'b0. Router doesn't know what to connect it to, since they may be connected to one of the pwr grids or to tieoff cell o/p. This usually happens in 2 scenarios:
1. when an existing cell becomes a spare cell, because the i/p to that cell got connected to something else. In such case, i/p pin of this cell has no connection and hence tool connects it to 1'b1 or 1'b0.
2. The other scenario is when an existing cell's o/p was driving the i/p of some other cell, but then the eco change caused that cell to be used as a spare cell. So, now the i/p and o/p of that existing cell have different connections, and the o/p of this existing cell can't be used to drive the i/p of that other cell. So, the tool connects it to 1'b1 or 1'b0.

Detailed soln at this link:
http://support.cadence.com/wps/myportal/cos/COSHome/viewsolution/!ut/p/a1/nY9NDoIwEEbPwgFMp1AoLOtPQGggKkbKxkBsTCMUguDC0wvGxJWaOLuZvHkzH8pRhnJd3NS56FWji2rqc-e4xiuCgwRC32MLYEA34T7CZkTtERAjAB-Kwa_9A8qfyBeDGE_Qt8Pl3APmR842oKkFCUV73XT1-Oxucp2kbLnSFyT6bpDT5NpUwxQnHupSdkhgTDwbU_IS28GSQAg4TOYmBRakxCcxx5CYf4vbOoOZqF3LVuWdGcYDE0SB8g!!/dl5/d5/L2dBISEvZ0FBIS9nQSEh/

we need to use this flow to fix the issue:

A. NON ECO design: Do it after placement as it's easy to add cells:
   1. restoreDesign
   2. placeDesign # Run placement before inserting tie high/low cells
   3. setTieHiLoMode -cell {TIEHI TIELO} # Specify tie high/low cells to use
   4. addTieHiLo # Insert the tie high/low cells. We need to add these cells as they are removed during placement in step 2 above. Appropriate Tiehi/Tielo cells will be inserted in every module that needs them, and 1'b1 and 1'b0 will be connected to these.

B. ECO design (all layer): Add Tiehi/Tielo cells and then do eco Place/Route:
   1. addTieHiLo -cell "TIEHI TIELO"
   2. ecoPlace
   3. ecoRoute

C. ECO design (metal only): If the TIEHI/TIELO cells were already present in the netlist, route them using NanoRoute.
  1. selectNet <tielo_signal_o/p_from_tieoff_cell>
  2. setNanoRouteMode -routeSelectedNetOnly true
  3. detailRoute -select

D. If routing tiehi/tielo signals to the pin doesn't work, we can just connect any of the other pins to the floating pin. that way there's no extra routing (as pins are close together, so most of the times little bit of MET1 routing inside stdcell will suffice). This usually works for spare cells (or cells whose o/p is not used for functional purpose, so tying i/p pin to any signal will work). Steps to do this are as follows:
  1. attachTerm IShootCtrl/g21 B1 IShootCtrl/n513 => connect pin B1 of gate g21 to net n513 (which is connected to pin B2 of g21). This only connects logically, physical connection will be done later
  2. ecoRoute => actual routing done. ecoRoute cmd used to minimize any routing changes.

Run below cmds on any final design to make sure there are no 1'b1 or 1'b0 in netlist:
To ensure that all your tiehi/lo connections have tie cells (and are not connected to a rail instead), run the following dbGet commands:

  dbGet top.insts.instTerms.isTieHi 1
  dbGet top.insts.instTerms.isTieLo 1

The previous commands should return "0x0" if all connections have tie cells. If "1"s are returned, use the following commands to find the terms that still need a tie cell:

  dbGet [dbGet -p top.insts.instTerms.isTieHi 1].name
  dbGet [dbGet -p top.insts.instTerms.isTieLo 1].name

---------------------------------------------

VLSI cad design flow and associated tools:

Vlsi design flow involves making transistors in a particular technology by a Fab company. These Fab companies then give their transistor models, as well as pre-designed digital and analog logic, to be used by the tools provided by 3rd party companies. These CAD design tools are run for different stages of design. We'll look at design flows using both proprietary tools and open source tools. For proprietary tools, we'll look at tools from Cadence and Synopsys. For open source, we'll look at Qflow.

Proprietary and Open Source CAD Design Tools

There are 2 big players in VLSI CAD design tools: Synopsys and Cadence. Both of them are public companies with revenues in order of $5B/year. Other smaller players are Mentor Graphics, Ansys, etc. All big players keep buying smaller players, who in turn buy even smaller players. Ultimately, there will be just 2 EDA companies which will serve most of the EDA market: Cadence and Synopsys.

Synopsys: Synopsys was founded in 1986. It was initially established as "Optimal Solutions". It acquired "Magma Design Automation" for about $0.5B in 2011, which along with Mentor Graphics were the 3rd and 4th biggest players in EDA market at that time. Synopsys had revenue of $5B and profits of $1B as of 2020. Synopsys releases versions of their tools almost once every quarter. On top of that, if some bugs were found and fixed in the previous release, they release service pack (SP) for that. So, for ex if they provide version of a tool as 2010.06-SP4 => it's released in year=2010, month=6, and is service pack 4.

Cadence: Cadence Inc was founded in 1988 by the merger of SDA Systems and ECAD inc. It has $3B in revenue and $1B in profits as of 2020.

Mentor Graphics (MG): MG was founded in 1981, and went public in 1984. Cadence was about to purchase "Mentor Graphics" in 2007, but then withdrew the offer. MG was finally purchased by Siemens in 2017, and renamed "Siemens EDA".

Magma Design Automation: MAGMA was founded in 1997, and rounded up the top 4 players in the EDA market. It initially focused on physical design software, but later broadened its product portfolio to compete with the other 3 top players. Magma's peak revenue was $200M in 2008. It was sold to Synopsys in 2011.

Ansys: Ansys was founded in 1970. It has revenues of $2B and profits of $0.5B as of 2020. However, its VLSI CAD tool portfolio is pretty small, as most of its products are in "finite element analysis", which is used to simulate computer models of structures, electronics, or machine components for analyzing strength, toughness, etc.

Open Source: Next comes Open Source tools. These are fragmented, and no organization has taken the burden of developing them (No equivalent of Open Source Software or Linux here). These are primarily developed by individuals here and there.

These are the VLSI CAD software developed by various companies. We'll learn in detail about these tools in their respective sections. This is just an introductory material.

Purpose | Synopsys | Cadence | Open Source | Others
Logic Synthesis | Design Compiler (DC), Fusion Compiler | RTL Compiler (RC), Genus (uses CUI) - Latest 16.1 | - | -
Place and Route | IC Compiler (ICC) | Encounter Digital Implementation (EDI), Innovus or Innovus Implementation System (uses CUI) | - | -
Static Timing Analysis (STA) | Primetime, PTSI (for noise), PTPX (for Power) | Encounter Timing System (ETS), Tempus (uses CUI) | - | -
RTL Signoff | SpyGlass | JasperGold | - | -
Logical Equivalency Checker (LEC) | Formality | Conformal or Encounter Conformal, Jasper (does lot more than Formal Verification) | - | -
Physical Verification (LVS, DRC, etc) | IC Validator | Pegasus | - | -
Power Simulation | PrimePower? | Joules | - | -
RC Extraction | Star RC | Quantus | - | -
IR/EM Analysis | - | Voltus | - | RedHawk (from Ansys)
RTL Simulations | VCS | Incisive NC-Sim, Xcelium | - | -
Schematic and Layout Editor | Custom Designer | Virtuoso | - | -
SPICE simulation | Hspice | Spectre | - | LT Spice (from Linear Technology, free to use)
DFT (Scan Pattern) | TetraMax | Encounter Test (ET), Modus (uses CUI) | - | -



Digital ckt library considerations:

Before we can use CAD tools for digital design, we need to have digital libraries. A digital ckt library has all the gates such as AND, OR, FLOP, LATCH, etc that are needed to design digital circuits. If the chip is purely digital, then we just go with the lowest nm technology, as it gives the lowest area and hence lowest cost (since the cost of a chip is directly proportional to area). However, when we have a mixed signal design, where a significant portion of the design is analog and only a small portion is digital (maybe 80-90% analog and 10-20% digital), then the choice of tech node is not that easy. We want to go with an appropriate tech node depending on our size and speed requirements. Usually in mixed signal chips, analog is the bigger portion, so going with very low nm doesn't give much area saving for the total chip (since only digital shrinks, while analog stays almost the same size). Also, analog needs transistors which can withstand higher voltages, so gates with thicker oxide and larger L are needed (large L implies lower current, so lower speed, but can withstand higher Vds voltage).


For a typical (typ) voltage of "V" volts, we guarantee proper operation of circuit at 90% of V and 110% of V. We allow these +/- 10% voltage variation to account for IR drop, voltage overshoot etc. These become max and min voltages for our chip operation. Besides these max, min and typ voltages, we also have vbox voltages.

vbox: (run at normal temp). These are test conditions to bound the part. These are extremely high or low voltages to which the part is never going to get exposed, but may be useful to run nonetheless, as they may point to fragile parts which are on the cusp of failing. IDDQ Scan patterns are run at vbox hi/lo to make sure scan works. There is also a "vburn in" condition, to detect early failures, done at high voltages. vbox/vburnin are done only for the digital ckt, so there has to be a way to disable the analog ckt while doing vbox tests. scan_iddq tests are run before and after vbox, and any appreciable change in iddq is taken as a sign of failure.

  1. vbox_hi : assume high voltage extremes can be used to accelerate failure mechanisms due to infant mortality failures. Vbox_hi is set to the lowest voltage that causes gate oxide to break or Bvdii (drn to src breakdown or drn/src to body jn breakdown) to occur. For Tox=75A, gate oxide breaks at 1000V/um*0.0075um=7.5V, while Bvdii is much lower at 2.5V, so Vbox_hi=2.5V for a 1.5V transistor in a given 180nm tech.
  2. vbox_lo : assume low voltage extremes detect failure mechanisms that would have occurred later in the product's life. Vbox_lo takes the max threshold voltage of the Nmos or Pmos and scales it upward by 40%. So, for 180nm tech, 1.5V transistor, Vth=0.7V, so Vbox_lo=0.7*1.4=0.98V. But sometimes the ckt can't even get out of PORZ (power on reset Z) at such low voltages, so the design should be modified to allow the digital to operate at such low voltage.
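#A quick arithmetic check of the vbox numbers quoted above (Tcl expr; values taken from the text):
puts [expr {1000.0 * 0.0075}]  ;# gate-oxide breakdown estimate for Tox=75A => 7.5V
puts [expr {0.7 * 1.4}]        ;# vbox_lo = max Vth scaled up by 40% => 0.98V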

 

400nm (0.4um) Tech Lib: This large nm tech is used in many mixed signal IC designs. Digital transistors can handle up to 4V.

400nm tech is 3.3V digital library. Lmin=0.4um drawn (no shrink, so Final L=0.4um).
gate density = 14K gates/mm^2 for 2LM (2 layer metal), 19K gates/mm^2 for 3LM (3 layer metal), 23K gates/mm^2 for 4LM (4 layer metal)
nom: N_25C_3.3V (room temp, nominal voltage with nominal process)
max: W_150C_3.0V (max op temp of 150C, 10% below nom voltage) = max delay
min: S_-40C_3.6V (min op temp of -40C, 10% above nom voltage) = min delay

vbox:
hi: S_27C_5.00V (at max voltage part can run at)
lo: W_27C_1.02V (at min voltage part can run at)

210nm (0.21um) Tech lib: Even though the digital ckt operates at 1.8V, transistors can survive up to 3.6V.

210nm tech is 1.8V digital library. Lmin=0.6um drawn (shrink=0.35, so Final L=0.21um)
gate density = 50K gates/mm^2 for 3LM, 75K gates/mm^2 for 4LM, 80K gates/mm^2 for 5LM. Gate density has improved substantially here, so it's advantageous to move to this 210nm tech, provided analog transistors can operate at such low voltage.
nom: N_25C_1.80V (room temp, nominal voltage with nominal process)
max: W_150C_1.65V (max op temp of 150C, 10% below nom voltage) = max delay
min: S_-40C_1.95V (min op temp of -40C, 10% above nom voltage) = min delay

vbox:
hi: S_25C_3.20V (at max voltage part can run at) => helpful to find hold margin,
lo: W_25C_0.95V (at min voltage part can run at) => usually not relevant as digital circuit can't run at such low voltage. Nevertheless we still run Timing runs to make sure all timing runs are clean.

Libraries with RC variants:

So far, we considered only the transistor in the process part of PVT. But in reality, resistance, capacitance, and transistors all have process dependency.

VLSI Digital Flow:

Below are the various steps in taking an RTL to final gds to be taped out.

1. Synthesis: (DFT test synthesis is done within the synthesis tool). RTL to gate synthesis is done here.

The tool takes the provided RTL and generates a gate level netlist for it. This gate level netlist doesn't have any floorplan or pin locations. It's just a verilog file containing the connections of all the gates. We take it to the Place and Route tool, which does the actual placement.

2. PnR: The synthesized netlist obtained in step 1 above is placed and routed.

Inputs to the PnR tool:
- max/min delay libs (W_150_1.65_CORE/CTS.db and S_-40_1.95_CORE/CTS.db used)
- lef files used: tech.lef and core.lef (pml30_lbc8_tech_3layer.lef & pml30_lbc8_core_2pin.lef used)
- min/max cap tables used for metals (3m_nom/max/minC_nom/max/minvia.capTbl)
- sdc file: we point to the sdc file from DC synthesis which has all constraints (i/o delay, clk waveform/freq, false_paths, etc)

A. create floorplan. provide floorplan size, add power routes and IO pins.

B. create max/min views for func/scan:
func_max = worst case lib, worst case capTbl, and constraints.sdc file from DC synthesis.
func_min =  best case lib,  best case capTbl, and constraints.sdc file from DC synthesis.
scan_max = worst case lib, worst case capTbl, and scan.sdc file.
scan_min =  best case lib,  best case capTbl, and scan.sdc file.

- constraints.sdc file has all clks/generated clks defined and sets case analysis with scan_mode=0.
- scan.sdc file has scan clk defined and sets case analysis with scan_mode=1. Here, no other clks need to be defined as in scan mode, scan_clk should be feeding to all the flops.
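
A hedged Encounter MMMC sketch of the view creation in step B (the lib/capTbl/sdc names are placeholders based on the files listed for this flow; actual names come from the PDK and synthesis dirs):
create_library_set -name wc_lib_set -timing {W_150_1.65_CORE.db W_150_1.65_CTS.db}
create_library_set -name bc_lib_set -timing {S_-40_1.95_CORE.db S_-40_1.95_CTS.db}
create_rc_corner -name max_rc -cap_table 3m_maxC_maxvia.capTbl
create_rc_corner -name min_rc -cap_table 3m_minC_minvia.capTbl
create_delay_corner -name max_dc -library_set wc_lib_set -rc_corner max_rc
create_delay_corner -name min_dc -library_set bc_lib_set -rc_corner min_rc
create_constraint_mode -name func -sdc_files {constraints.sdc}
create_constraint_mode -name scan -sdc_files {scan.sdc}
create_analysis_view -name func_max -constraint_mode func -delay_corner max_dc
create_analysis_view -name func_min -constraint_mode func -delay_corner min_dc
create_analysis_view -name scan_max -constraint_mode scan -delay_corner max_dc
create_analysis_view -name scan_min -constraint_mode scan -delay_corner min_dc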

C. place:
- set analysis view for setup to func_max and for hold to func_min.
- propagate clk and run pre-place setup timing.
- place IO buffers, place design and then rerun setup timing.
- place spares, and then do post optimization to fix setup and drv (if required), and then rerun setup timing
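
A hedged sketch of step C in Encounter cmds (report prefixes are arbitrary; IO buffer and spare placement follow the same pattern):
set_analysis_view -setup {func_max} -hold {func_min}
timeDesign -prePlace -prefix digtop_preplace => pre-place setup timing
placeDesign => place the design
timeDesign -preCTS -prefix digtop_place => setup timing after placement
optDesign -preCTS -drv -prefix digtop_place_opt => post-placement opt to fix setup/drv, then rerun setup timing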

D. CTS:
- we set case analysis with scan_mode=1. This is so that CTS is done using single scan clk. That way all clks are balanced in scan mode.
- CTS done honoring clks and skews,etc in .ctstch file for each clk specified (main clk, spi clk). generated clks not specified since we do CTS thru generated clks. Note: if we have scan, then we use scan_clk port for CTS (which is usually spi_clk) and no other clks are needed.
- we set case analysis back to scan_mode=0. Then run setup and hold. Many hold failures will be seen as clk skew will cause extra delay. It will cause setup issues too, if our setup slack was very small to begin with. However, some setup/hold paths may get fixed as well.
- do post cts opt to fix setup, drv. Hold is usually not fixed here, as we'll fix it during route with some slack margin. Then run setup and hold.
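
A hedged sketch of step D (the ctstch file name and prefixes are placeholders; the scan_mode case analysis toggling is assumed to be handled via the sdc/constraint modes):
clockDesign -specFile digtop.ctstch -outDir clock_report => build and balance the clock trees per the spec
timeDesign -postCTS -prefix digtop_cts => setup after CTS
timeDesign -postCTS -hold -prefix digtop_cts => hold after CTS (expect failures due to clk skew)
optDesign -postCTS -drv -prefix digtop_cts_opt => fix setup/drv; hold deferred to the route step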

E. Route:
- route done, and then native extractor (RC extract with effortlevel low) used for the first time to extract parasitics.
- setup and hold run.
- Opt done (with hold slack to 0.2ns, setup slack to 0.05ns) to fix setup, hold and drv.
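
A hedged sketch of step E (target slacks match the margins quoted above; prefixes are placeholders):
setNanoRouteMode -routeWithTimingDriven true
routeDesign -globalDetail => global + detail route
setExtractRCMode -engine postRoute -effortLevel low => native extractor
extractRC
timeDesign -postRoute -prefix digtop_route
timeDesign -postRoute -hold -prefix digtop_route
setOptMode -holdTargetSlack 0.2 -setupTargetSlack 0.05
optDesign -postRoute -prefix digtop_route_opt => fix setup/drv
optDesign -postRoute -hold -prefix digtop_route_opt => fix hold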

F. STA:
- native extractor (RC extract with effortlevel low) rerun. Native extractor uses cap tables to look up Res, Cap, so is less accurate than QRC extractor.
- set analysis view for setup to func_max, func_min, scan_max, scan_min, and for hold to func_max, func_min, scan_max, scan_min. For the first time, we run setup/hold for all views. Usually we see setup/hold failures here (as setup is now also run in func_min and hold in func_max for the first time) as well as scan timing failures (as scan mode is run for the first time for setup/hold). For our designs, we mostly see hold time failures, as setup has good slack to start with.
- Run setup and hold, and do post sta opt (if needed) to fix hold and drv.

G. Signoff:
- QRC extractor (RC extract with effortlevel signoff, coupled set to true) run. The QRC extractor solves Maxwell's equations in 3D to arrive at res, cap and does NOT use cap tables, so it is more accurate.
- Run setup and hold, and do post sta opt (if needed) to fix hold (hold slack of 0.1ns), setup (setup slack of 0ns) and drv.
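
A hedged sketch of the signoff extraction and opt in step G (prefixes are placeholders):
setExtractRCMode -engine postRoute -effortLevel signoff -coupled true => QRC field-solver extraction
extractRC
set_analysis_view -setup {func_max func_min scan_max scan_min} -hold {func_max func_min scan_max scan_min}
timeDesign -signoff -prefix digtop_signoff
timeDesign -signoff -hold -prefix digtop_signoff
setOptMode -holdTargetSlack 0.1 -setupTargetSlack 0.0
optDesign -postRoute -hold -prefix digtop_signoff_opt => only if hold/drv violations remain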

H. Filler: Filler cells added

I. Final checks: final connectivity, geometry and antenna checks done.

J. Export final: Put min/max SPEF, DEF and Verilog netlist in FinalFiles dir.
- max/min SPEF files generated using QRC extractor (RC extract with effortlevel signoff, coupled set to true).
- DEF and verilog netlist written out.
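
A hedged sketch of step J (output file names are placeholders; the per-corner spef selection, e.g. an rc-corner option to rcOut, depends on the tool version):
setExtractRCMode -engine postRoute -effortLevel signoff -coupled true
extractRC
rcOut -spef FinalFiles/digtop_max.spef => repeat for the min corner
saveNetlist FinalFiles/digtop_final_route.v
defOut -floorplan -netlist -routing FinalFiles/digtop_final_route.def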

3. Timing: Now timing is run using some signoff timing tool that guarantees that all valid paths are timed and shows any failing paths.


- PT should see exactly the same paths that VDIO ETS was seeing.
- both setup/hold run for scan/noscan at min/max delay (wc/bc PVT) corners. Total of 8 separate runs.
- additional vbox hi/lo corner run for scan mode
- min/max SDF file generated

 - flow (PT):
  - Running PT for noscan_min, noscan_max, scan_min, scan_max, scan_vbox_min(vbox_hi), scan_vbox_max(vbox_lo) => repeat 6 times
   - set lib to std cell lib min/max lib (for vbox choose appr PVT)
   - read gate level verilog
   - read min/max spef file
   - read sdc constraint file (func or scan, for vbox choose scan constraint)
   - set analysis type to single mode (run only one corner at a time - max_func, min_func, max_scan, min_scan, max_vbox, min_vbox)
   - do checks, report timing for both setup/hold
  - Running PT for max and min sdf generation => repeat 2 times for max and min
   - set lib to std cell lib min/max lib
   - read gate level verilog
   - read min/max spef file
   - do checks, write sdf for min/max corners.
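
A hedged PT sketch for one run (max lib, func mode); file names are placeholders, and the other runs just swap the lib/spef/sdc:
set link_path "* W_150_1.65_CORE.db W_150_1.65_CTS.db"
read_verilog FinalFiles/digtop_final_route.v
link_design digtop
read_parasitics FinalFiles/digtop_max.spef
read_sdc sdc/constraints.sdc => func sdc; use the scan sdc for scan/vbox runs
set_propagated_clock [all_clocks]
check_timing
report_timing -delay_type max => setup; use -delay_type min for hold
write_sdf -version 2.1 digtop_max.sdf => only for the sdf-generation runs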

 - flow (ETS):
  - Running ETS for noscan_min, noscan_max, scan_min, scan_max, scan_vbox_min(vbox_hi), scan_vbox_max(vbox_lo) => repeat 2 times (one for vbox)
   - read lib for std cell lib min/max lib
   - read gate level verilog
   - create views  (same as in EDI)
     - create wc/bc std cell lib corner = wc_lib_set, bc_lib_set
     - create wc/bc rc corner = max_rc, min_rc (specify cap_table as well as qx_tech_file, but ETS only supports the QRC extractor (not cap table based). However, we specify spef below so that gets used)
     - create constraints by specifying sdc files for func and scan mode.
     - create 4 analysis views => func_max, func_min, scan_max, scan_min
   - set analysis view => -setup {func_max func_min scan_max scan_min} -hold {func_max func_min  scan_max scan_min}
   - read min/max spef file => rc corner needs to be specified here, but isn't used for anything (just a syntax thing). spef files can be read only after analysis view is set.
   - set analysis type to bcwc mode (uses max delay for all paths during setup checks and min delay for all paths during hold check from min/max lib)
   - write sdf for min/max corners for view func_max, func_min (views don't matter for sdf files)
   - do checks, report timing for both setup/hold for view func_max, func_min, scan_max and scan_min separately in 8 diff reports.
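
A hedged ETS sketch mirroring the bullets above (view and file names are placeholders; exact report/sdf cmd options vary with the ETS version, so treat this as a sketch only):
set_analysis_view -setup {func_max func_min scan_max scan_min} -hold {func_max func_min scan_max scan_min}
spefIn digtop_max.spef -rc_corner max_rc
spefIn digtop_min.spef -rc_corner min_rc
setAnalysisMode -analysisType bcwc
write_sdf digtop_max.sdf => repeat for the min corner
report_timing -late -view func_max => setup report; repeat per view
report_timing -early -view func_min => hold report; repeat per view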
   

4. Formal verification: Now we need to verify that the netlist generated by the synthesis tool and PnR tool is logically identical to the RTL. This is called Formal Verification (equivalence checking), which is conceptually equivalent to running all possible patterns on both the RTL and the gate netlist.

Formal Verification is supposed to be a push-button thing: the synthesized netlist is generated by the vendor's synthesis tool, and the LEC tool verifies the synthesized netlist against the RTL to make sure it's correct. It should pass by default; otherwise the synthesis tool or the LEC tool has a bug. However, various lib models may cause inconsistencies.

Cadence Conformal is considered the gold standard in LEC as it allows any 3rd party netlist to be checked against RTL. Synopsys's Formality requires some hints from the synthesis tool to help it. Jasper is the latest verification tool from Cadence that can do a lot more than just formal verification. Jasper provides a wide range of Applications (Apps) covering: Formal Property Verification, Sequential Equivalence Checking, Low-Power Verification, Connectivity Verification, Config/Status Register Verification, X-Propagation Analysis, Structural Property Synthesis, and Behavioral Property Synthesis (just to name a few).
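
A minimal Conformal dofile sketch for the RTL-vs-netlist check (paths and the RTL file name are placeholders; the same cmds show up again in the conformal ECO section later):
read library -both -liberty /db/pdkoa/.../MSL270_N_27_3_CORE.lib
read library -both -liberty /db/pdkoa/.../MSL270_N_27_3_CTS.lib -append
read design -verilog -golden -sensitive -root digtop rtl/digtop.v => RTL as golden
read design -verilog -revised -sensitive -root digtop netlist/digtop_scan.v => synthesized netlist as revised
add pin constraints 0 scan_en_in -golden
add pin constraints 0 scan_en_in -revised
set system mode lec
add compared points -all
compare => should report 0 non-equivalent points
report statistics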

5. Scan Patterns DFT Tool: Once we have done scan stitching and added all scan related logic, it's time to run Scan patterns, and check what kind of coverage they provide.

Just running scan patterns doesn't guarantee that the patterns will run correctly on the design. So, we also run scan simulations - which is essentially running the scan patterns on the gate level netlist (with sdf annotation) by writing a special testcase. The above 2 tools already provide a built-in testcase that we can use to run sims on the patterns. This is called scan simulation.


6. RTL sims: Here we write a bunch of testcases and run them on the RTL. If the RTL logic has multiple power domains, then Power aware RTL (PARTL) sims can also be run.


7. Gate sims: Here we run sims on the final gate netlist (from the PnR tool) instead of on the RTL.


8. spyglass (optional)
9. icfb (to upload digital design to top level design)
10. patgen
11. power: power rail analysis using Encounter Power system (EPS from Cadence),
           redhawk (??) => for EMIR (used in veridian), Totem
           NEW: VOLTUS: power analysis tool (Cadence) for IC chip power consumption, IR drop, and electromigration (EM).

PDK (Process Design Kit) link: http://pdk.dal.design.ti.com/ (at TI, we have a pdk dir, which is used by our flow). All pdk info is kept in a PCD (Process Control doc) for each process. A Strawman PCD is built on simulated/extracted data (no Si data), a Beta PCD is released after the process baseline has been set, and a Production PCD is released after verifying it with Si.

OA (OpenAccess) PDKs are now being used everywhere; these have the database in OA format. The OA format and API are developed by Si2 (si2.org) and are free and open to everyone.

Mentor (process tech info)  website: https://mentor.itg.ti.com
----------

FreePDK (from NCSU): base kit - http://www.eda.ncsu.edu/wiki/FreePDK
FreePDK (from Nangate): generic open cell lib based on 45nm.

Free CAD tools are here: http://opencircuitdesign.com/index.html

------------------
Scribe Line Structures: Test structures needed to verify the PCD.

-------
manufacturing grid : The manufacturing grid is the grid on which all design rules are based. No shape may exist in the database that is not aligned to this grid. The manufacturing grid is 2.5 nm for this 45nm process on FreePDK45. The coarser the manufacturing grid, the lower the cost of mask tooling.

LBC7: For the TI LBC7 process, the MG is 0.050 um, even though a min coding increment of 0.10um is required. This is to accommodate centerpoint and centerline figures. Sizes that are not on grid are snapped to the nearest grid. The final mask size adjust is a combination of the design size adjust (which adjusts sizing relative to the minimum grid size), the selective size adjust, and the process size adjust (which compensates for process manufacturing offsets), and may be different for different layers. A shrink of 0.9 is done from the Drawn CD to the final Reticle (mask) CD for LBC7.

------------------------
CDB PDK Dir: This has all lib data (schematic, layout, etc) in cadence database (cdb) format.
-----------
/db/pdk/<Process_name> (Process_name can be for fab process, packaging, foundry, etc)
lbc*/tsmc*/umc* => all fab process
bicom* => all bicmos info
foundary =>
sample_pdk =>
copper => metal/via rules put separately here, since metal rules are not maintained by the lbc7, etc. process platforms.
packaging => all pkg rules here, since pkg rules are not maintained by the lbc7, etc. process platforms.

--------
Most used Process_name : lbc7, lbc8(shrinking factor of 0.35 applied to drawn design for lbc8), tsmc*, umc*, bicom*

LBC7 dir: /db/pdk/lbc7/rev1/

DIGITAL LIB section
--------------------
digital lib dir: diglib/msl270/r3.0.0/. In this we have following dir:

verilog dir: all verilog models here
------------
verilog/models/*.v: models for all stdcells in terms of verilog primitives (nand, or, not, pullup, etc).  It has also defines for TI_functiononly, TI_openhdl, etc.
For ex: AN210.v has the gate modeled as:
and #0 TI_AND_PRIM0 ( Y , A , B ) ; //Verilog structure section (in terms of gate prims). The and gate has 0 unit delay. In TI_functiononly mode, we define gate delay as 0, delay mode as "distributed" and timescale as 1ps, so that delays are added for each gate in ps. This doesn't affect much, as delays are already 0, so the total delay is also close to 0ps. However, for non-function mode, we define a module delay as #ns and set delay mode to "path" so that module delays are used directly. In non-function mode, we define the AN210 gate delay as #0.01 (equals 0.01ns with the 1ns timescale directive). However, from the ATD page, this gate has a delay of about 1ns, so the delay is not defined correctly. It doesn't matter, as we do not use this delay value; we use the delay that comes directly from the gate delay in the sdf file.
For delay cells (as DLY03), we model delay as 3 time unit delay (3ns for 1ns timescale).
For filler cells (as SPAREPOLYCAP32), we don't have anything in module defn.

verilog/verilogsrc(ams)/msl270_lbc7_*pin/*.vams : verilog analog models for all std cells (4 variations of the same cell: 2pin, 3pin, iso_2pin, iso_3pin). Note this variation wasn't there for the normal verilog models, as they don't model this difference in structure. 2pin has VDD/VSS pins. 3pin and iso_2pin have an additional PBKG pin. iso_3pin has a further additional VSS_ISO pin.
Ex: AN210.vams => has 2 extra pins VDD,VSS as electrical pins (has additional PBKG pin for 3pin variation), and A,B,Y have sensitivity to VDD/VSS. rest of the structure section is same.

synopsys dir: all CORE.lib and CTS.lib here (same for all 4 variations of std cells)
--------------
synopsys/src/MSL270_*.lib: src lib files for various PVT corners for both CORE and CTS
synopsys/bin/MSL270_*.lib: binary .db files for various PVT corners for both CORE and CTS (derived from .lib files)

vdio dir: all lef and cap/res files
---------
vdio/lef/msl270_lbc7_tech*.lef: tech lef file. has metal/via width/spacing/antenna_ratio info. Diff tech files for 2/3/4 layer
vdio/lef/msl270_lbc7_core_*pin.lef: std cell lef files for all 4 variations (2pin, 3pin, iso_2pin, iso_3pin). For each std cell, lef file has physical metal layout info for i/p, o/p pins and VDD/VSS and/or BLKG/VSS_ISO. This doesn't have internal guts of cell, but just the pin and blkg info needed for routing.
vdio/captabl/2lm_maxC_maxvia.capTbl:  use these LUT values for calc timing, instead of doing full extraction using Maxwell's eqn. defined for 2/3/4 metal layers for max/nom/min Cap/Res. has Cap table (which has cap values for diff width/space for diff metal layers), and various metal/via process variations (min Width/Space, height, thickness, resistance, thermal coeff,etc)

variation dir: all cdb data for all stdcells (schematic, symbol, layout, verilog)
--------------
msl270_lbc7_2pin/<std_cell>/layout|schematic|symbol|verilog|srcVerilog|srcVerilogAMS : similarly for iso_2pin, 3pin and iso_3pin. => note: the schematic dir has sch.cdb and master.tag (master.tag just has sch.cdb written in it), the layout dir has layout.cdb and master.tag (master.tag just has layout.cdb written in it), etc. Everything is stored as cdb (cadence database).

PAL dir: gdsii data for all stdcells kept here
-------
PAL/pml30CorePall/CORE/gdsii/CORE.fram.gdsii => for CORE cells
PAL/pml30CorePall/CTS/gdsii/CTS.fram.gdsii => for CTS cells

ANALOG DIR section
-----------------

drc rules: rules/assura/2010.12.22/
Assura is the physical verification tool (both lvs/drc) from Cadence. It's integrated with the extraction tools.
drc.releaseNotes.txt => find all info related to drc files (drc rules are usually in the drc.rul file, which includes files from the "files" dir).
copper rules are in /db/pdk/copper/rev1/rules/assura/*
packaging checks are in /db/pdk/packaging/rev1/rules/assura/*

qrc/ => has qrc tech files (in binary format) for metal/via layers.
QRC is 3D full-chip parasitic extraction and analysis tool from cadence. it includes an integrated field solver and does an RLCK extraction for cells, RF, analog, mixed signal, custom digital, etc.

------------------------
OA PDK Dir: This has all lib data (schematic, layout, etc) in open access database (oa) format.
-----------
/db/pdkoa/<Process_name>/<rev>/
Let's look at the lbc8 dir: /db/pdkoa/lbc8/2011.06.26/

In this, we have following dir:
cdk/          copper/       diglib/       esdlib/        models/       releaseNotes/ rules/

DIGITAL LIB section for diglib dir:
--------------------
digital lib dir: diglib/pml30/. In this we have following dir:

verilog dir: all verilog models here, same as in CDB PDK.
synopsys dir: all CORE.lib and CTS.lib here. same as in CDB PDK.
vdio dir: all lef and cap/res files. same as in CDB PDK.
PAL dir: gdsii data for all stdcells kept here. same as in CDB PDK.
OA db dir: all OA data for all stdcells (schematic, symbol, layout, verilog)
ex: pml30_lbc8_2pin/BU110/ has following subdir:
*.oa contains actual data in oa format, while master.tag is ascii file with just the name of oa file that contains data. NOTE: even .oa files are not large, as they just have references to transistor, vias, metal lines, etc and don't contain the actual drawing.
schematic: sch.oa,    master.tag, data.dm
symbol:    symbol.oa, master.tag
layout:    layout.oa, master.tag
abstract:  layout.oa, master.tag => similar to layout dir, but is slightly smaller in size.
module, srcVerilog, srcVerilogAMS:  all these dir have same content =>  netlist.oa, master.tag, verilog.vams.

-------------------------------------------


ECO Flow:
-----------
Many types of ECO flow are supported in Encounter L/XL/FE-L/FE-XL (Enc version 7 and above). GXL and Conformal ECO have additional support for ECO. GXL supports ecoRemap, while Conformal supports conformalECO.

These ECO flows supported in Enc:

1. Pre-Mask changes from ECO file => note: eco file generates a new verilog netlist, which is used by subsequent steps.
2. Pre-Mask changes from new verilog netlist =>
loadConfig old.conf, set rda_Input(ui_netlist) "newchip.v", ecoDefIn oldchip.def, ecoPlace, ecoRoute, ..
3. Pre-Mask changes from new DEF file => similar as 2 above, as def file contains new logical cells/connections. An ECO file is generated by ecoCompareNetlist, loadECO loads this eco file, and then follow as in 2.
4. Post-Mask changes from new verilog netlist => use -postMask for ecoDefIn to minimize mask changes. Otherwise same as pre-mask.
5. Post-Mask changes using ECO spare cells or GA(gate array) cells. => preferred method at TI

-----------------------------------
2 methods for generating the new eco netlist: (we've got the new 1p1 rtl with our fixes in)
1. Manual: Here, we look at the 1p0 netlist in debussy and figure out where to place gates in the 1p1 netlist. Then, we can modify the netlist in 2 ways (in both of these ways, the new netlist only has new gates added with appr connections; these new gates are not connected or matched to spare gates, as the VDIO tool is supposed to do those connections):
 A. eco directive file: create an eco_directive file that has the list of gate changes and connections. What this does is add new gates with appr connections to the netlist. We should look at the new netlist generated before we move to VDIO. Then we read in this file in VDIO.
 B. directly modify old netlist: We directly modify the old netlist and then read in the new netlist. This new netlist has new gates added with appr connections. It's similar to the option above except that we don't use eco directives.

Then we run VDIO, which reads in the old def and the new netlist (either eco directives or new verilog netlist). The tool then matches the changes with spare gates, places them and routes them to create an lec-clean netlist and def. Then we run final checks, timing, etc. We discuss this method below.

2. Conformal: Here we use conformal lec. We modify the rtl for 1p1. Then we run it thru synthesis to create the 1p1 netlist. Then we run conformal lec, which diffs the old 1p0 PnR netlist with the newly synthesized 1p1 netlist. It creates a patch, and then generates a new netlist with the changes. This new netlist uses spare cells, so spare cells are already mapped to new logic within the netlist (you'll see that spare cells are removed from the spare module). In the manual option above, no spare cells were mapped to new logic. Then we go thru the regular PnR flow to accommodate this patch in VDIO. We discuss this method in the later section marked conformal_eco.

NOTE: Regardless of which method we use for generating netlist, we have to run ecoDesign in VDIO (super cmd which does everything for us). We can also make changes directly to verilog and run ecoDesign.
#ecoDesign cmd is supported in all Enc version, so use ecoDesign. It takes EDI System database and a modified netlist as input and performs ECO operations. It restores the design, examines the changes in the new netlist, and automatically implements the required  changes  with ecoPlace and ecoRoute. deffile is not there in enc.dat database, but it's OK as rout.gz and place.gz has that info.
ecoDesign -postMask -modifyOnlyLayers <MLb>:<MLt> -spareCells <spareCellName> -useGACells <GACoresite> <old_design.enc.dat top_cell new_netlist> => Use -noEcoPlace -noEcoRoute if we don't want to ecoplace and ecoroute with this cmd, but want to do it separately.
 
---------------------------------------
Manual ECO (non conformal):
---------------------------------------
Interactive ECO: provides manual incremental updates to the design.
#we can also do an interactive ECO by going to optimize->interactive ECO for PreMask changes. We can add repeater(ecoAddRepeater), upsize/downsize instances(ecoChangeCell), delete buffers(ecoDeleteRepeater), display buffer tree to modify it(displayBufTree)

#For PreMask/PostMask ECO, we can also do file->ECO design. Then goto Place->ECO Place for placement. then do routing by route->Nanoroute (choose ECO route) => instead of doing it on the encounter cmd line
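
Hedged examples of the interactive ECO cmds named above (instance/net/cell names are made up; check the exact options in the Enc text cmd reference for your version):
ecoAddRepeater -net i_spi/n123 -cell BU140 -loc {105.2 220.8} => insert a buffer on a net at a given location
ecoChangeCell -inst i_spi/u_and2 -cell AN220 => upsize/downsize an instance in place
ecoDeleteRepeater -inst i_spi/eco_buf_1 => remove a previously added repeater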

------------------------------------
ECO changes using ECO spare cells (post mask):
------------------------------------
Flow for Making changes from ECO file: (preferred method at TI)
load old config file => load new_change.eco file (modifies old verilog to get new verilog) => ecoDefIn old_def file => specifySpareGate => ecoPlace => ecoRoute =>save design

#0. make new(1p1) dir and cp files from old(1p0) dir & then change mode:
cp -rf /db/YELLOWSTONE/design1p0 /db/YELLOWSTONE/design1p1
chmod 777 -R /db/YELLOWSTONE/design1p1

1. Update RTL in Source dir.
#run debussy in old dir on gate level netlist to look at gates to make changes in gate level netlist
cd /db/EPSTINGRAY/design2p0/HDL/Debussy.
Run create_symbols if *.lib++ dir not present
run_debussy => runs debussy with/without any options. If with options, "-f <gate_netlist>". If w/o options, bring up debussy, click  File->Import design, Put the file name (/db/DRV9401/design1p0/HDL/FinalFiles/digtop_VDIO.v) in bottom box, and then click Add, then OK.

2A. Run ECO flow in VDIO after running encounter (tcl/eco_flow.tcl => this file has all cmds in it for eco flow)
NOTE: to add/delete any pins, use PinEditor in VDIO gui (Edit->PinEditor). Do this before running ecoplace. Else, new pins are added at origin (0,0). To edit/add pin, you can also use "editPin" cmd:
#editPin -pinWidth 0.4 -pinDepth 0.4 -fixOverlap 1 -side Left -layer 3 -assign 0.0 367.85 -pin RX_SEL[4]

1st option: run ecoDesign with no place and route. This super cmd is explained above.
-----------
ecoDesign -postMask -noEcoPlace -noEcoRoute -spareCells spr_*/spr* dbs/filler/filler.enc.dat digtop /db/HAMMER_OA/design1p1/.../digtop_final_route_eco.v

2nd option: do it old way of reading in 1p0 config, 1p0 def and 1p1 directives.
----------
loadConfig dbs/filler/filler.enc.dat/digtop.conf => load previous config file for VDIO. Change dir path (set cwd) to present dir in this file. Config file only has path locations of digtop.v (note that even though this is verilog file for filler, it doesn't have any filler cells in it, as they don't exist in tech .lib file), tech .lib and tech .lef files. It doesn't have def file info, so we have to do DefIn to read def file.

# Read 1p1 eco_directives => This adds new cells, makes new connections etc to 1p0 verilog file to make new 1p1 file.
source tcl/eco_directive_1p1.tcl
# Or other option is to modify old netlist manually to create new netlist, and read that new netlist.
#set rda_Input(ui_netlist) "/db/.../digtop_final_route_1p1.v" => This overwrites the "i/p netlist" in loadConfig above
#commitConfig => This commits the config file so that all parameters spec above are applied


# ECO DEF in the old DEF file. since new verilog design is in memory and we are reading old def file, tool can figure out new changes for 1p1.
#-useGAcells GACoresite => specifies GA Core site to use for gate array eco. In cell lef file, it looks for cells with "SITE = GACoresite name" specified here. Regular stdcells have "SITE CORESITE", while GA cells have "SITE GACORESITE". That's how tool knows which cells are ECO cells that can be built from filler cells. this cmd implies "postmask" mode.
#-suffix _SPARE_DELETED => Appends the specified suffix to cells that appear in the DEF file but have been deleted in the new netlist. Default: _SPARE
#-postMask => When used with -postmask option, tool can only change nets, not cells. tool checks for cells that exist in memory but not in def, and marks them as unplaced. It then maps these to fillers/spares during ecoPlace. Modified nets which are found in both memory and DEF file, but whose connections are different, are processed during ecoRoute. When -postMask option is not used, it implies pre-mask mode, which can change cells too (it can put any new cell in empty space of fillers).

ecoDefIn -postMask -reportFile ecoDefIn.rpt /db/YELLOWSTONE/design1p0/HDL/FinalFiles/digtop/digtop_final_route.def => Restores physical information from an old design and compares this information with the new design (modified verilog using the directives above). It gives a report on screen saying what new inst/net were added, etc (and also dumps it in the report file ecoDefIn.rpt specified above).

# Specify Spare Gate (-inst specifies instance name and NOT module name) This is needed only if we are not using gate array as spares (i.e. GA cells are not specified above).
specifySpareGate -inst spr_*/*spr_* => use any spare cell in spare modules.
specifySpareGate -inst spare_*/spare_inv_* => if you want to use only inverters
specifySpareGate -inst I_scan_iso_out/g1453 => This specs any extra gate (unused) as a spare cell. This is useful when we have some unused gates in the netlist that we want to use for eco purposes.

After running one of the options above, run ecoPlace and ecoRoute.
-------------
ecoPlace -useSpareCells true

#user intervention to change spare cell mapping. Provide instance name and NOT module name
ecoSwapSpareCell i_inst/eco_inst_an2 spr_3/spr_gate65 => gate "spr_3/spr_gate65" from the spare cell module is mapped to i_inst/eco_inst_an2. Here eco instance an2 was already mapped to some spare cell, but we didn't like the mapping, so we swap it with this other spare cell. Now spr_gate65 implements eco_inst_an2, and the spare cell that was previously mapped to eco_inst_an2 is freed up as a spare.

# ECO Route new netlist. If only certain metal layers, specify them
ecoRoute -modifyOnlyLayers 1:3
setNanoRouteMode -quiet -drouteFixAntenna true => set this if antenna errors still remain
ecoRoute -modifyOnlyLayers 1:3 => rerun eco route if errors are still there

#ecoRoute may not be able to route because it doesn't touch non-eco nets. Rerun ecoRoute until all errors are fixed. If errors still remain, we can run Nanoroute directly in eco mode. However, ecoRoute is still preferred to be run since it does preprocessing which minimizes routing changes. Cmds below do the same job as ecoRoute above, but can move non-eco nets too.
setNanoRouteMode -quiet -drouteFixAntenna false => optional, improves routing.
setNanoRouteMode -quiet -routeWithEco true
#setNanoRouteMode -routeEcoOnlyInLayers 1:3 => can use this single cmd or use these 2 cmds:
setNanoRouteMode -quiet -routeBottomRoutingLayer 1 => bottom routing layer has to be the lowest layer on which there are existing nets. Otherwise error says: "conflict with already existing routed wires on layer x-1"
setNanoRouteMode -quiet -routeTopRoutingLayer 3 => similarly, top routing layer has to be the highest layer on which there are existing nets.
setNanoRouteMode -quiet -routeSelectedNetOnly false
routeDesign -globalDetail => instead of this, we can also use: globalDetailRoute 100.0 1200.0 350.0 600.0 => specify co-ords if you want to reroute within a certain area. Although globalDetailRoute and "routeDesign -globalDetail" seem to do the same thing, they produce different results; "routeDesign -globalDetail" gives better results.

#NOTE: keep on rerunning "routeDesign -globalDetail" until it passes all drc. (set antenna fix to false)
#Do not use the globalRoute command in ECO mode (use globalDetailRoute as shown above. globalRoute only performs global routing, while detailRoute only performs detailed routing).
#If more than 10 percent of the nets are new or partially routed, run full global and detailed routing instead of ECO routing (set routeWithEco false so that routing is done from scratch. Most of the times, it fixes all routing issues.)

#to route only a list of nets, which are in selectnets.txt file (one net per line)
set NET_FILE [open "selectnets.txt"]
foreach i [ read $NET_FILE ] {
 selectNet $i => can only select one net at a time. wildcards are allowed
}
close $NET_FILE
#route these nets
setNanoRouteMode -routeSelectedNetOnly true => routes selected nets only. default is false (routes all nets).
routeDesign -globalDetail

#set_attribute -net <netName> -skip_routing true => to skip routing on selected nets, useful when we don't want to touch nets which are on layers above or below the eco routing layers. Since we can only specify one net_name, we have to use this cmd multiple times for multiple nets. However, this is dangerous to use, as set_attribute ties the attribute with that net, and it is saved with the database. so, next time encounter runs on this database, this attribute is still there, unless we set attribute to false.

# Save design
saveDesign ./dbs/eco_filler_1p1/eco_filler_1p1.enc
checkDesign -noHtml -all -outfile ./dbs/eco_filler_1p1/eco_check_design.rpt => checks design for all issues. Necessary as final_check.tcl later doesn't check for floating nets, etc.

2B. with eco_flow.tcl, we saved the new db into filler.enc. So, we need to run steps beyond filler, to run all checks and get the final netlist.
#Note: we should run timing as extra step since we should make sure design is timing clean, before we run PT.
timeDesign -signoff -reportOnly       -prefix digtop_post_route_signoff
timeDesign -signoff -reportOnly -hold -prefix digtop_post_route_signoff

#run post route opt if timing not met or any other violations
setOptMode -effort high
optDesign -postRoute -hold -prefix digtop_post_route_opt

#now run steps beyond filler
source tcl/final_check.tcl => to verify conn, etc.
source tcl/export_final.tcl => run extractRC to generate spef, get final verilog and defout.

3. run Formality to check RTL against ECO netlist. (For ECO netlist, use verilog generated above)
4. Rerun PT to check if timing is ok
5. Rerun all RTL sims and gate sims
6. Regenerate Tetramax patterns and rerun gatesims
7. Import Netlist and DEF to Cadence for top sims and tapeout

------------------------------------
ECO changes using gate array cells (post mask):
------------------------------------
When we use GA cells, we don't have any GA cells in the netlist. For doing an ECO change, we modify the original netlist to add ECO cells ending in E (made from GA cells), and then in the layout, appr filler cells are connected to form these GA cells.
These ECO cells have a suffix E at the end to indicate that they are GA cells (i.e. IV110E). These cells are generated from base filler cells (FILLER5, FILLER10, etc) which are present in the layout. These filler cells were inserted in the layout during the filler step (where these base filler cells are filled first and then any remaining spaces are filled with Dcap fillers). The tool figures out which E cells can be implemented with which FILLER cells by looking at the physical view of the cells. NOTE: filler cells are never removed from the design. The tool just picks an appr space where the ECO cell can be placed, and makes metal connections to reflect the change. The filler cells are still unmodified under the ECO cell.

1. first make sure that ECO filler cells were put in 1p0 design using these 2 cmds:
addFiller -cell  FILLER5 FILLER10 FILLER15 FILLER20 FILLER25 FILLER30 FILLER40 FILLER50 FILLER55 -prefix FILLER_ECO
addFiller -cell  SPAREFILL1 SPAREFILL2 SPAREFILL4 SPAREMOSCAP3 SPAREMOSCAP4 SPAREMOSCAP8 -prefix FILLER_NORMAL

2. get the i/p netlist from 2p0 and modify it manually to add the new eco cells (ie IV110E, AN210E, etc) that need to be added, connecting them appropriately. name it as dig_top_noPhys_2p1.v
ex:    IV120E eco2_2p1_inv27 (.Y(eco2_2p1_prdata_27_bar), .A(eco2_2p1_prdata_27));

3. Once done with changes, read the old layout db and new netlist
ecoDesign -postMask -noEcoRoute -noEcoPlace dbs/handoff_20121106.enc.dat dig_top /data/PROJECT/.../dig_top_noPhys_2p1.v => we don't do ecoPlace and ecoRoute here as we do them in separate steps

4. do ecoplace using GA cells
ecoPlace -useGAFillerCells "FILLER55 FILLER50 FILLER40 FILLER30 FILLER25 FILLER20 FILLER15 FILLER10 FILLER5" => these filler cells should not have FIXED attribute in def file, else tool would not pick these for replacement, and will try to move around other std cells (which is incorrect).

5. If placement causes any errors (like overlapping placement etc), fix it by deleting and moving
eg: deleteInst FILLER_pdLogic_18457 (delete and then move/add filler cells manually and place them at correct location)

6. do ecoRoute
ecoRoute

7. If ecoRoute doesn't fix all routing violations even after multiple attempts, do full blown routing. Can be done from GUI also: Route->NanoRoute->Route. Unselect ECO Route and select "Global Route, Detail Route". Check "Fix Antenna". If you want to fix net by net, select "selected nets only" and select those nets in the encounter layout gui.
   setNanoRouteMode -quiet -routeWithEco true => may be set to false if we want to do full blown route
   setNanoRouteMode -quiet -drouteFixAntenna true
   setNanoRouteMode -quiet -routeTopRoutingLayer default
   setNanoRouteMode -quiet -routeBottomRoutingLayer default
   setNanoRouteMode -quiet -drouteEndIteration default
   setNanoRouteMode -quiet -routeWithTimingDriven false
   setNanoRouteMode -quiet -routeWithSiDriven false
   routeDesign -globalDetail -viaOpt -wireOpt

8. confirm location of newly added cells and then save new db
 selectInst *_2p1*
 saveDesign ./dbs/handoff_eco_2p1

9. Now run steps 2B and beyond (step 2B, 3-7) as shown in Normal filler cell flow. Run timing, optDesign and then steps beyond filler.
----------------------
ECO directives (old way):
---------------------
to make new connections, we use the cmds listed below in tcl/eco_directive_1p1.tcl.
NOTE: all cmds specify the instance name of a module, and NOT the module name itself. However, if we use -moduleBased, then we specify the module_name itself. As the module_name is unique for each instance in the synthesized netlist (see final synthesized netlist format desc above), it works OK.

addModulePort: To add port or bussed port to a module. Module should not have net with that name. Ex:
addModulePort i1/i2/i3 p1 input => adds i/p port p1 on instance i3(in hier i1/i2).

attachModulePort: Attaches a port in the specified instance (or top level) to a net. Seems like this cmd attaches ports to nets outside the module (i.e the net has to be at a higher level of hier than the port). This cmd doesn't detach anything, so detachModulePort cmd also needed. (this is different than attachTerm which does detach automatically)
attachModulePort i1/i2/i3 p1 i1/i2/n1 => connects port p1 on i1/i2/i3 to the net i1/i2/n1.

detachModulePort: Detaches the net connected to the specified port on the specified instance.
detachModulePort i1/i2/i3 p1 => detach port p1 from module i1/i2/i3

addNet: Adds a net to the design. The net can be logical or physical. Ex:
addNet i1/n1=> adds net i1/n1

addInst: Adds an instance and places it in the design. Ex:
addInst -cell BUF1 -inst i1/i2 -loc 100 200 => adds buffer instance i1/i2 at location 100, 200. (-cell specifies master of instance while -inst is the actual instance)

attachTerm: Attaches a terminal to a net. If the terminal already connects to a different net, the software first detaches the terminal from the current net, then attaches it to the new net. Previously we used to use: detachTerm to detach the existing net, but not needed any more. Ex:
attachTerm i1/i2/i3 in1 i1/i2/net26 => attaches terminal in1 of instance i1/i2/i3 (in1 is a port of i3) to net i1/i2/net26

NOTE: addModulePort and attachModulePort cmd can be avoided by using attachTerm which is more generic cmd.
#For ex, to connect internal gate o/p within one module to internal gate i/p within another module, use this:
attachTerm  spi_regs/eco2_inv A clock_reset_gen/n37 => attaches terminal A of the inv in the "spi_regs" module to net n37 in the "clock_reset_gen" module. Note, this cmd first figures out the port thru which pin A of the inv can be accessed, and then connects the net to that port. So, if there are multiple connections to that port, all of them will get connected. In essence, this cmd connects a port to a net. If the port doesn't exist, it creates a port in the module (spi_regs) with that netname (n37).

#alternative way would be to have ports and then connect them
addModulePort     spi_regs  eco2_spi_inp input
addNet     -moduleBased digtop eco2_rst_connect
attachModulePort  spi_regs  eco2_spi_inp eco2_rst_connect => created port for "spi_regs" module and connected net to it.
addModulePort     clock_reset_gen eco2_clk_reset_out output
attachModulePort  clock_reset_gen eco2_clk_reset_out eco2_rst_connect => created port for "clock_reset_gen" and connected same net to it.
attachModulePort  clock_reset_gen eco2_clk_reset_out clock_reset_gen/n37 => connects the other end of port to net n37

Steps: to do an ECO fix, first find the net name where you want to insert your logic. Use debussy on the gate verilog and find the net on the schematic of that module. If the o/p net of the new logic goes to fewer instances compared to the i/p net of the new logic, leave the i/p net as the existing net name and make the o/p net a new net. Do vice versa (i.e. if the o/p goes to more instances, leave the o/p net as the existing net and make the i/p net a new net).
Ex: attach an inverter to input of flop
#-moduleBased <module_defn_name> sets the module defn_name so that hier is not required. Note we specify module_defn_name and NOT module_instance_name. As module_name is unique for each instance in the synthesized netlist (see final synthesized netlist format desc above), it works OK.
addInst    -moduleBased spi_regs_test_1 -cell IV140 -inst eco1_inv_before => add the inverter instance
addNet     -moduleBased spi_regs_test_1 eco1_reg_input => add a new net to connect to the o/p of the inverter
attachTerm -moduleBased spi_regs_test_1 eco1_inv_before A n610 => attach the inverter i/p to existing net n610 (we chose the i/p since n610 might be driving multiple loads)
attachTerm -moduleBased spi_regs_test_1 eco1_inv_before Y eco1_reg_input => attach the inverter o/p to the new net.
attachTerm -moduleBased spi_regs_test_1 vbg2_op_out_reg_2 D eco1_reg_input => attach the FF D i/p to the new net. This causes existing net n610 to disconnect from the D i/p of the FF. If n610 wasn't connected to any other o/p, it would become floating.

---------------------
conformal ECO:
-------------------
Here, conformal generates the patch that can be used in VDIO. Run LEC:
lec -12.10-s400 -gui -xl -ecogxl -log ./logs/eco.log -dofile scripts/eco.do => -ecogxl enables post mask eco.

eco.do file:
-----
1. set common settings:
set log file eco.log -replace
usage -auto

2. Read library: (note that .liberty files are read, and NOT verilog models)
read library -both -liberty /db/pdkoa/.../MSL270_N_27_3_CORE.lib
read library -both -liberty /db/pdkoa/.../MSL270_N_27_3_CTS.lib  -append

3. Read design. Read 1p0 final PnR netlist as golden and new synthesized netlist as revised.
#NOTE: spare cell modules are missing from revised netlist since they are added in PnR flow
read design -verilog -golden  -sensitive -root digtop /db/HAMMER_OA/design1p0/HDL/FinalFiles/digtop/digtop_final_route.v
read design -verilog -revised -sensitive -root digtop /db/HAMMER_OA/design1p1/HDL/Syhnthesis/digtop/netlist/digtop_scan.v

4. set eco directives:
A. enable ECO modeling directive. It's necessary to have this for eco. Other directives are optional.
 1. set flatten model -eco => prevent default flatten modeling from removing important info that is vital to correlate the ECO change back to original netlist. It's a macro that enables a number of related modeling options such as "set flatten model -keep_ignored_po -noremove_real_buffer" etc.
 2. set flatten model -enable_analyze_hier_compare => analyze hier bdry (module bdry) comp of flattened design. Needed to do hier comparison later.

B. other directives:
set flatten model -Latch_fold => if needed
set flatten model -seq_constant => if needed
set flatten model -gated_clock => if needed Gated clock control

#scan shift en turned off since scan chain may be different in new synthesized netlist. scan_mode still allowed both 0/1 values as scan_mode signal is just like any other logic signal.
add pin constraints 0 scan_en_in     -golden =>
add pin constraints 0 scan_en_in     -revised =>

#specify any new pins added/deleted at top level
#ex: new pin new_in2 added at top level digtop which also goes into submodule ctrl. so, we add this eco pin in both modules for golden (since they don't exist in golden. If we don't specify ports for modules/sub-modules, then tool is not able to add it for golden, and so reports them as unmapped points.). -input specifies i/p pin, while -output specifies output pin (default is input).
NOTE: We specify module definition name and NOT instance of module, as pin needs to be added on module defn.
add eco pin scan_out_iso  new_in2 -input -golden => use delete eco pin for deleting pins.
add eco pin scan_out_iso  new_out2 -output -golden => scan_out_iso is a submodule but still referenced as module since conformal flattens the design.
add eco pin clock_reset_gen_test_1 new_out2 -output -revised => clock_reset_gen_test_1 is the module defn name in revised netlist.

set mapping method -unreach

5. analyze hier bdry and do hier comparison (lec mode). It should show non-eq points. Then create a patch based on that.
set system mode lec
#while doing hier comp, hier_analyze.do file is generated which has all cmds for hier comparison. ecopins.do file is also generated which has eco cmd for adding pins to modules which need it in new netlist
analyze hier_compare -dofile hier_analyze.do -replace -constraints -verbose \
                     -threshold 0 -noexact_pin_match -noexact_module_match \
                     -eco_aware -input_output_pin_equivalence -function_pin_mapping -ecopin ecopins.do

add compared points -all
compare => should show non eq points
report statistics => reports

compare eco hierarchy => we break down comparison to sub-module level
report eco hierarchy -noneq -verbose
analyze eco -hierarchical patch.v -replace -ecopin_dofile ecopins.do -preserve_clock => creates a patch file which has only the gate changes needed for 1p1

6. apply patch, then optimize patch based on spare/GA cells, then write final netlist.
set system mode setup
dofile ecopins.do => add pins needed to modules.
#apply patch: -auto Automatically reads in and applies all patches that were created with the ANALYZE ECO cmd in the current session. (-keephierarchy specifies that the ECO changes will be put in a sub-module. Do not use this option, as that will cause problems in VDIO)
apply patch -auto => shows patch file being read and applied.

#### this section optional = to check if patch is good
set system mode lec
add compare point -all
#delete compare point A_REG[0] => To omit certain non-equiv points from eco analysis
compare  // this checks if the patch is good before optimization; the design should be equiv (1p0 vs patch)
write eco design -replace -newfile digtop_tmp_eco.v => If we write out the netlist here, it will have separate eco modules which will have the new instances/connections. Later, after doing optimize patch, we get a netlist which has no separate eco modules; the changes are within the existing modules. The netlist with no separate eco modules is the one that can be used in VDIO, else it will give an error for having extra modules.

NOTE: we could stop here and use the netlist generated above in VDIO. However, there are 2 problems. First, the netlist has extra modules, and secondly it has cells in eco_modules which may not be present in spare_cell module, so these will need to be substituted by cells which are actually present in spare_cell module. So, optimize patch step needed.

####
set system mode setup
###spare/GA cells added for Post-mask eco only. For pre-mask, ignore this section
add spare cell -freedcell => add any freed up cells to be used as spare cells
add spare cell -deffile  /db/HAMMER_OA/design1p0/HDL/FinalFiles/digtop/digtop_final_route.def -sparecell spr_*/spr* => this adds all spare cells to be used for eco
#add spare cell -deffile  /db/HAMMER_OA/design1p0/HDL/FinalFiles/digtop/digtop_final_route.def -sparecell GAFILL* => this adds all GA cells for eco
#delete spare cell -sparecell spr_*/spr_AN2* => This disables any specific spare cell that we don't want to use
report spare cell => This shows all available freed cells as well as spare cells
###

#optimize patch does the actual mapping to get new netlist generated
optimize patch -verbose  -usespare -workdir WORK \ => for postmask using spare/GA cells, use -usespare
-library "/db/pdkoa/.../MSL270_N_27_3_CORE.lib \
          /db/pdkoa/.../MSL270_N_27_3_CTS.lib" \
-netnaming eco_net_%d \ => within each module, new eco nets named as eco_net_1, eco_net_2, etc
-instancenaming eco_instance_%d \ => within each module, new eco instances named as eco_instance_1, eco_instance_2, etc
-rcexec "rc -12.20-s014" \ => version of rc to use
-sdc /db/HAMMER_OA/design1p1/HDL/DesignCompiler/digtop/sdc/hmr_constraints.sdc \
-def /db/HAMMER_OA/design1p0/HDL/FinalFiles/digtop/digtop_final_route.def \
-lef /db/pdkoa/.../vdio/lef/msl270_lbc7_tech_3layer.lef \
     /db/pdkoa/.../vdio/lef/msl270_lbc7_core_iso_2pin.lef  \
-mapscript mapping.tcl => This is optional and creates a mapping file which maps new eco cells with location-aware spare/GA cells. This can be used in PnR, so that PnR doesn't have to do the tedious process of mapping

#report eco changes -script -file test.script -replace => generates ECO inst set file that can be used directly in verplex(by using -script option) or EDI (by using -encounter option. it generates eco directive file)
report eco changes > eco_changes.rpt => reports eco changes for each module (new nets, instances, pins, etc)
write eco design -replace -newfile digtop_final_route_eco.v => new netlist can be tkdiff with old netlist to see differences.

exit

7. Now use the netlist generated above in Encounter to do place and route as in any eco flow. This new netlist just has new instances added/deleted, but doesn't have the mapped spare cell instance connections. This will be done in EDI. However, we will still need to modify the above netlist to add scan chain connections for any newly added flops.

---------------
DIFF the layout: After the ECO change, verify the differences to make sure, only desired metal layers got changed.
--------------
A. generate laff files for digital design for both 1p0 db and 1p1 db.

1. On cadence CIW (cmd/log window, NOT the lib mgr), goto TI_Utils->Translators->Physical->LAFF_out@TI. We get new pop up box (CDS2Laff).
2. Provide the Form Template File name if you have one (.cds2laff.ctrl). This file has all the info in it, so that we don't need to type anything in the boxes below; you need to load this file to save typing. .cds2laff.ctrl looks like this (resides in /proj/DRV9401/users/kagrawal/DRV9401 or anywhere else):
topcells
Hawkeye_digtop_1p1  digtop  layout
end
laffname   /sim/BENDER/pindi/assura/digtop_1p1.laff
nosystemlayers
layermap /data/pdk/lbc8/rev1/cdk446/4.6.a.19/doc/cds2laff.map
logfile /data/BENDER/users/pindi/BENDER/CDS2LAFF.LOG
signal label

If the .cds2laff_1p1.ctrl file is not loaded, then run steps 3 to 8, and then save this template as .cds2laff.ctrl.
3. leave run dir as current (.).
4. choose cell type as "cellname" and Provide Library name (Hawkeye_digtop_1p1), cell name(digtop), view name (layout)
5. Provide Laff file name to write to (ex: digtop_1p1.laff).
6. choose layer map table => this comes from the pdk doc dir. Without this layer mappings will not be correct and we may get a "syntax error" on running difflay (for HPA07, it's /data/pdkoa/50hpa07/2012.11.13/cdk/itdb/doc/cds2laff.map)
7. choose signal type as "label". (leave "exclusive layer mapping" as ticked)
8. click "OK", and the digtop_1p1.laff file is generated in the dir mentioned. (choose yes and no for the first 2 options that pop up)

Repeat steps 1 thru 8 for the digtop_1p0 design to generate digtop_1p0.laff (by changing the .cds2laff_1p0.ctrl file appr)

NOTE: Look at CDS2LAFF.LOG file. At the very bottom, we should see 0 errors and 0 warnings.

B. Diff b/w digtop_1p0.laff and digtop_1p1.laff
Open difflayman tool (by typing difflayman on the unix command window). On the gui, provide the laff path for the 1p0 and 1p1 laff files, cell names as digtop, path to summary out file and log path file, and then click submit. A new window pops up. when it's done, then you can click on "view summary" to see what layers changed.

-------------------------------------