PT-SI:

PT-SI is PT with Signal Integrity. It is basically the same timing tool with crosstalk analysis added (requires a separate PT-SI license; a regular PT license won't work).

link for discussion of si_xtalk_delay_analysis_mode option in PT-SI:
https://solvnet.synopsys.com/retrieve/015943.html?otSearchResultSrc=advSearch&otSearchResultNumber=4&otPageNum=1

PT-SI runs thru these steps:
1. Electrical filtering, where aggressor nets whose effects are too small to be significant (based on the calculated sizes of bump voltages on the victim nets) are removed. You can specify the threshold level that determines which aggressor nets are filtered. If the bump height contribution of an aggressor on its victim net is very small (less than 0.00001 of the victim's nominal voltage), this aggressor is automatically filtered.
2. After filtering, PT SI selects the initial set of nets to be analyzed for crosstalk effects from those not already eliminated by filtering. You can optionally specify that certain nets be included in, or excluded from, this initial selection set.
3. The next step is to perform delay calculation, taking into account the crosstalk effects on the selected nets. This step is just like ordinary timing analysis, but with the addition of crosstalk considerations. This step runs in 2 iterations:
I. For the initial delay calculation (using the initial set of selected nets), PrimeTime SI uses a conservative model that does not consider timing windows.
II. In the second and subsequent delay calculation iterations, PT SI considers timing windows, and removes from consideration any crosstalk delays that can never occur, based on the separation in time between the aggressor and victim transitions or the direction of the aggressor transition. The result is a more accurate, less pessimistic analysis of worst-case effects. By default only 2 iterations are done, as these provide good results. The number of iterations is controlled by this variable (example below): si_xtalk_exit_on_max_iteration_count => defaults to 2
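
If 2 iterations are not enough (e.g., timing windows are still changing between iterations), the limit can be raised. A minimal sketch using the variable named above:

set si_xtalk_exit_on_max_iteration_count 3 => allow one more window-refinement iteration (more iterations = more runtime)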

Logical correlation for buffers and inverters is considered in PT-SI. For ex, if both the input and output nets of an inverter are aggressors to the same victim net, then they switch in opposite directions, cancelling the coupling effect and resulting in a very small delta delay or noise effect.

The PT-SI flow is the same as the normal flow, except that we have to enable SI. These are the steps:
1. Set target lib and link lib the same way. Set op cond to OCV (e.g., set_operating_conditions -analysis_type on_chip_variation).
2. Enable PT-SI (if we want to run SI)
set si_enable_analysis TRUE

3. set parameter for xtalk analysis
#For xtalk, the default is to calculate max delta delay for all paths (all_paths).
#set si_xtalk_delay_analysis_mode
#all_paths -> Calculates max delta delay for all paths through the victim net. Could be pessimistic for critical paths for 2 reasons: First, the switching region of the victim is derived from the early and late timing windows without considering the individual subwindows that constitute it, so it might include regions where there is no switching on the victim. Second, the entire on-chip variation of the path is considered, creating the effect of multiple paths even when only a single path exists, for example, in a chain of inverters.
#all_path_edges -> Considers only the edges of transitions on the victim net. This eliminates false timing-window overlap caused by multiple paths, and results in more accurate xtalk delay.
# worst_path -> DEPRECATED, do not use. Calculates max delta delay only for the critical path through the victim. Accurate for the critical path but could be optimistic for non-critical paths. We pick the victim critical path, so the victim window is a discrete edge, and false overlap of timing windows is eliminated.
# violating_path -> DEPRECATED, do not use. Calculates max delta delay for the worst path and all paths with <0 slack.
set si_xtalk_delay_analysis_mode all_path_edges

4. read verilog and parasitics as normal.
read_verilog /db/DAYSTAR/NIGHTWALKER/design1p0/HDL/FinalFiles/digtop/digtop_final_route.v
current_design $TOP
link

read_parasitics -keep_capacitive_coupling -format spef /db/DAYSTAR/NIGHTWALKER/design1p0/HDL/FinalFiles/digtop/digtop_qrc_max_coupled.spef => -keep_capacitive_coupling is needed to preserve all coupling caps from the spef file; otherwise they are grounded, and we won't see any noise effect. NOTE: the spef file used here should have been generated with coupling caps in it (they should not be grounded). In EDI, that's done by generating the spef after running "setExtractRCMode -coupled true".
report_annotated_parasitics -check => make sure that coupling cap is shown here

5. read sdc constraints, check_timing, and then report timing for setup/hold.
report_timing -crosstalk_delta -delay max|min -path full_clock_expanded -nets -capacitance -transition_time -max_paths 500 -slack_lesser 2.0 => reports delta delay due to noise in GBA (can use -cross as an abbreviation instead of -crosstalk_delta). The dtrans column in the report shows the delta transition caused by xtalk, while the delta column shows the delta delay.
report_timing -crosstalk_delta -pba_mode exhaustive -delay max|min -path full_clock_expanded -nets -capacitance -transition_time -nworst 1 -max_paths 50 -slack_lesser 0.2 => reports delta delay due to noise in PBA.

PBA mode improves noise delay significantly for 3 reasons: (https://solvnet.synopsys.com/retrieve/012134.html?otSearchResultSrc=advSearch&otSearchResultNumber=6&otPageNum=1)
A. slew rate is improved.
B. only a single victim edge is considered for a single path of the victim. Aggressors still have windows, as they can have multiple paths, but this reduces the overlap between victim and aggressor, eliminating a lot of false victim-window overlap.
C. CRPR (clock reconvergence pessimism removal) is improved, resulting in hold time improvement.

NOTE: PBA results can't be captured in SDF, as SDF has a single delay value associated with each cell (it's a graph-based representation).

6. static noise analysis: noise related reports. It uses noise modeling from the .lib, or estimates noise based on delays/slews.
PTSI uses the following order of precedence when choosing which noise immunity information to use:
1.Static noise immunity curve annotated using the set_noise_immunity_curve command
2.DC noise margin annotated using the set_noise_margin command
3.Arc-specific noise immunity curve from library
4.Pin-specific noise immunity curve from library
5.CCS noise model from library
6.DC noise margin from library

#bottleneck for xtalk delta delay
report_si_bottleneck -cost_type delta_delay -significant_digits 3 => determines the major victim nets or aggressor nets that are causing multiple violations. Reports the nets having the highest "cost function". There are four different cost functions:
1. delta_delay – Lists the victim nets having the largest absolute delta delay, among all victim nets with less than a specified slack.
2. delta_delay_ratio – Lists the victim nets having the largest delta delay relative to stage delay, among all victim nets with less than a specified slack.
3. total_victim_delay_bump – Lists the victim nets having the largest sum of all unfiltered bump heights (as determined by the net attribute si_xtalk_bumps), irrespective of delta delay, among all victim nets with less than a specified slack.
4. delay_bump_per_aggressor – Lists the aggressor nets that cause crosstalk delay bumps on victim nets, listed in order according to the sum of all crosstalk delay bumps induced on affected victim nets, counting only those victim nets having less than a specified slack.
By default, the specified slack level is zero, which means that costs are associated with timing violations only. If there are no violations, there are no costs and the command does not return any nets.

#nets reported by the bottleneck cmd are investigated with this cmd.
report_delay_calculation -crosstalk -from <driver_pin> -to <load_pin> => provides detailed information about crosstalk calculations for a particular victim net (example below). It shows active aggressors, the reason an aggressor is inactive, delta delay/slew, and the victim analysis.
I - aggressor has Infinite arrival with respect to the victim
N - aggressor does not overlap for the worst case alignment
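
A hypothetical invocation (pin names are made up for illustration): U100/Y is the driver of a victim net flagged by report_si_bottleneck, U200/A a load on the same net:

report_delay_calculation -crosstalk -from U100/Y -to U200/A => shows each aggressor's bump and alignment, and why inactive aggressors (I/N above) were dropped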

#update_timing => updates timing due to xtalk, after "what if" fixes are made using size_cell and set_coupling_separation.
#update_noise => detects functional errors resulting from the effects of crosstalk on steady-state nets.

report_si_double_switching => determines the victim nets with double-switch violations in the design. Double-switching errors can cause incorrect circuit operation by false clocking on the inactive edge of a clock signal, by double clocking on the active edge of a clock signal, or by glitch propagation through combinational logic.

#static noise analysis:
#set_noise_parameters -ignore_arrival -include_beyond_rails -enable_propagation -analysis_mode report_at_source | report_at_endpoint
#-ignore_arrival => causes the arrival window information of the aggressors to be ignored during the noise analysis. Therefore, the aggressors are assumed to be always overlapping to maximize the effect of coupled noise bump.
#-include_beyond_rails => By default, the analysis of noise above the high rail and below the low rail is disabled. This option enables the analysis of noise beyond the high and low rails.
#-enable_propagation => Specifies whether or not to allow noise propagation. Propagated noise on a victim net is caused by noise at an input of the cell that is driving the victim net. PrimeTime SI can calculate propagated noise at a cell output, given the propagation characteristics of the cell, the noise bump at the cell input, and the load on the cell output.
#-analysis_mode report_at_source | report_at_endpoint => In report_at_source mode, violations are reported at the source of the violation. In report_at_endpoint mode, violations are propagated through the fanout and reported at endpoints. The default is report_at_source.

NOTE: no noise models are needed, as the default is "report_at_source" mode, where noise bumps are not propagated but rather reported at the source. Controlled by "set_noise_parameters".
set_noise_parameters -enable_propagation => noise propagated.

#set_noise_margin, set_noise_immunity_curve => Specifies the bump-height noise margins or 3 coefficient values for an input port of the design or an input pin of a library cell that determine whether a noise bump of a given height at a cell input causes a logical failure at the cell output. The noise immunity of the cell is provided here.

#set_si_noise_analysis => Includes or excludes specified nets for crosstalk noise analysis.

check_noise => checks the design for the presence and validity of noise models at driver and load pins. No pins should be found w/o noise constraints, i.e., the number of pins reported in the "none" row of the report must be zero.

update_noise => performs a noise analysis and updates the design with noise bump information using the aggressor timing windows previously determined by timing analysis.
report_noise -all_violators => generates a report on worst-case noise effects, including width, height, and noise slack. It also determines the victim nets with double-switch violations in the design. -all_violators reports only those pins/nets that have -ve noise slack (i.e., the noise bump is above the noise threshold). For a more detailed report, use -verbose.

#report_noise_calculation => generates a detailed report on the calculation of the noise bump on a net arc (single net). Same as report_delay_calculation except that it reports noise instead of delay. The startpoint is the driver pin or driver port of a victim net and the endpoint is a load pin or load port on the same net.

7. including/excluding certain nets (if you still have failures), and running delay/noise analysis again (a sketch follows this list):
#set_si_delay_analysis – Includes or excludes specified nets for crosstalk delay analysis.
#set_si_noise_analysis – Includes or excludes specified nets for crosstalk noise analysis.
#set_si_aggressor_exclusion – Excludes aggressor-to-aggressor nets that switch in the same direction. Only a specified number of aggressors (default 1) is active at a time.
#set_coupling_separation – Excludes nets or net pairs from crosstalk delay and crosstalk noise analysis.
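
A hypothetical "what if" loop tying these together (net/cell names are made up; check the exact argument syntax in your PT version):

set_coupling_separation [get_nets noisy_victim_net] => treat this net as shielded, i.e., exclude it from xtalk delay/noise analysis
size_cell U123 BUFX8 => upsize the victim driver to fight its aggressors
update_timing => incrementally re-run timing with xtalk after the "what if" fixes
report_timing -crosstalk_delta -max_paths 10 => check whether the delta delays are gone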

Black Friday 2020

This is an exclusive list of all BF 2020 deals. This year, BF deals have been nothing short of hype with no delivery. I've found better laptop deals before BF. That's true for many other deals that used to take place every BF, where you could get a bunch of stuff for free after MIR (mail in rebate); those deals have mostly disappeared this year. So, my advice (which is always wrong) would be to wait until after Christmas, and see if there are any markdowns on all the items these retailers have hoarded.

Anyway, I'll list BF deals below. They are not at all-time-low prices, but still decent. You also have an extended return window for the holidays (most retailers like Walmart, Target, and Best Buy allow you to return items until Jan 2021), so it's relatively low risk. You can always return an item if you find it at a lower price later in Dec or Jan.

The best place to find all BF ads is here (though too much advertisement makes it hard to navigate):

https://blackfriday.com/

 

Deals:

Some good deals summarized (more details in the bottom part of the page along with the deals):

  • TV: 65 inch TV (lowest price $230), 70 inch TV (lowest price $300), 75 inch TV (lowest price $500, still close to 2019's price point)
  • Laptop: 15.6 inch screen (lowest $250), gaming laptop (lowest $449)
  • Hoverboard: at Target for $67.50 (cheapest price)


Best Buy:

https://blackfriday.com/ads/black-friday/best-buy

NOTE: All Best Buy purchases from Oct 13, 2020 to Jan 2, 2021 are eligible for returns until Jan 2021. So, even if a deal is not that great, you can still buy it as a backup. If you find a better deal at Best Buy or elsewhere, just return it and get the other one.

Some of the noteworthy deals are:

  • Hisense 65 inch 4K TV for $250 (among the cheapest 65 inch TVs yet). Target has it even cheaper at $230 as of 11/06. See the Target section below.


Walmart:

NOTE: All Walmart deals below are very hard to get online. Very few people on Slickdeals have been able to snag them, so don't get your hopes up.

UPDATE: Looks like Walmart is releasing these deals slowly, every half hour or so. People are able to get all these online deals now by trying again and again over the day, as the deals keep going in and out of stock. So, keep trying !!

For sale on 11/04/2020: https://blackfriday.com/ads/pre-black-friday/walmart

Some of the noteworthy deals are:

  • Onn 65 inch 4K TV (11/04/2020) = $228 (cheapest 65 inch TV). The Target one is a better deal, since TCL is a better brand than Walmart's Onn.


For sale on 11/07/2020:

  • HP 15 inch gaming laptop for $449 => same deal as on 11/04. If you missed the one on 11/04, here's one more chance at it !!

 

For sale on 11/11/2020 (valid until 11/15/2020): https://blackfriday.com/ads/thanksgiving/walmart


Target:

This year Target really has some great deals. Their prices are even lower than Walmart's, and they don't sell out the way Walmart's online deals do.


Costco:

NOTE: Costco doesn't really have any Black Friday deals. They have slightly better deals during BF than at other times, but nothing at ludicrous prices that sells out.


Fast food chains:

Below I'm listing some fast food chains and the vegetarian menu items available at those chains that my kids or I liked. These were reasonably priced, filling enough, and not too bad in taste, I guess (since my kids finished them w/o complaining) !! If you are looking for formal dine-in restaurants, check out the "restaurant" section.

 

McDonald's:

A couple of options here. One of them is pie - apple pie, pumpkin pie, etc. These sell for about $1 including tax, and are filling enough that 1 pie is good for one kid.

McDonald's GC are not on sale that often. I've bought these GC on sale for 20% off at HEB, a grocery chain in Texas. I usually buy enough to last me a couple of years, considering I spend < $50 a year at McDonald's.

 

Chipotle:

Here you can get cheap Mexican food. Nothing fancy about the food; it's filling and costs < $10. You can buy Chipotle GC for 20% off a couple of times a year at grocery stores, Amazon, PayPal, drug stores, etc. They also have BOGO offers from time to time. If you make an account at Chipotle and don't have any activity on your online account, they will drop you a few offers from time to time. I've gotten a few BOGO offers that way. Sometimes they have online quizzes and the first 10K or so get BOGO offers (you can search for these on Slickdeals). On average, BOGO offers show up maybe 1-2 times a year.

 

Subway:

Subway is one of the cheapest fast food restaurants. They primarily serve sandwiches - veggie and meat. Their veggie sandwiches are really great.

You can regularly get their GC for 20% off ($50 GC for $40). A couple of times a year, subway.com itself gives these out. Sometimes they are available on sale at grocery stores, the PayPal website, etc. On top of that, Subway usually has a Buy 1, Get 1 Free (BOGO) offer. Normally their footlong sandwiches are $7, but with this promo they give you 2 subs for $7 or less. You also get 2 free cookies for filling out a survey. With the 20% off GC, the rewards you earn, and the free cookies from the survey, you end up getting two footlong subs for < $5. Not sure if you can make it that cheap at home. Make it 2X on all the fillings for twice as many calories. 2 footlong subs are good to feed 4 people, or < $1.25 per person per meal. Can't get any cheaper than that !! For the rewards, you will need to make an account at subway.com.

For vegetarian sandwiches, you can also get veggie patties to go in the sandwich. It costs a dollar extra, but it's well worth the price. It tastes really good, and is very high in protein and calories - it fills you up really well. Subway is my favorite fast food go-to place, as it gives you the best value per dollar.

 

Dominos Pizza:

Dominos is the cheapest pizza chain (among the big pizza chains - Papa Johns, Pizza Hut and Dominos).

You can regularly get a $25 Dominos GC for $20. They are available on sale at grocery stores, the PayPal website, etc. It's the most readily available of all the fast food GC sales. Dominos also has a rewards program where you earn 10 points for every order of $10 or more. 60 points gets you a free medium 2-topping pizza. So, $60 in spending gets you $6 worth of pizza, or the equivalent of 10% cashback.

For medium pizza, Dominos always has this deal: $6.99 each (raised from $5.99 in 2023) when you buy 2, with 2 toppings each. You can also add more items for $6.99 each (they don't have to be pizza; this applies to even the first 2 items).

For large pizza, Dominos always has this deal: $7.99 for a large pizza with 3 toppings (as of 2022). The price for a large pizza remained the same for 2023, but the number of toppings went down from 3 to 1.

Below are different Pizza offered by Dominos:

1. Medium/Large Hand Tossed Pizza: This pizza is available in both medium and large. It has 190 calories/slice, so the total for 8 slices = 190*8 ≈ 1520 calories per pizza. This is the second favorite crust at my home after the "Crunchy Thin Crust" pizza.

2. Medium/Large Crunchy Thin Crust Pizza: This pizza is available in both medium and large. The crust is very thin and crunchy. This is the best tasting pizza, but also has the fewest calories.

3. Medium Handmade Pan Pizza: This pizza is only available in medium, as its crust is very thick. It costs $2 extra, so the medium pizza costs $8.99 (raised from $7.99, as of 2023). It has 310 calories/slice, so the total for 8 slices = 310*8 ≈ 2480 calories per pizza. It has significantly more calories, as it has a much thicker crust. This is more filling but not as tasty.

 

Papa Johns Pizza:

Papa Johns is the most popular pizza, but it's a little more expensive than Dominos. Papa Johns also has a rewards program, where you get 1 point for every dollar spent. When you reach 75 points, you get $10 worth of "dough" that you can use to get a free pizza or anything under $10. So, $75 in spending gets you $10 worth of pizza, or the equivalent of ~13% cashback.

For medium pizza, Papa Johns always has this deal: $6.99 each when you buy 2, with 1 topping each. You can also add more items for $7 each (they don't have to be pizza; this applies to even the first 2 items).

For large pizza, Papa Johns always has this deal: $8.99 for a large pizza with 1 topping. However, this is location specific, and may be higher or lower.

Their large pizza is 2200 cal, while medium is 1500 cal (almost same as that at Dominos).

Some added bonus with Papa Johns pizza:

  • They include 1 garlic sauce + 1 banana pepper free with each veg pizza.
  • They give unlimited packets of cheese and pepper.
  • They have a base which you may choose to be cheese, meat or veg. If you choose veg, the base itself will have vegetables in it (the toppings you choose will be in the base too). That seems to make it yummier.
  • The large pizza is $1 more expensive than Dominos, but when you consider the free items included, it's almost at par with Dominos.

 

Taco Bell:

Taco Bell is a very popular Mexican food place. It used to be cheap, but is getting pricier now. Burritos, nachos and tacos are the most popular Mexican items at these fast food places. A few popular veg menu items:

  • Cheese Quesadilla => It costs about $4 and has 500 calories. It's my younger one's favorite.
  • Veg Burrito => Burritos are veg/beans fillings wrapped in a tortilla. A few varieties are the bean burrito, the rice and bean burrito, and the Fiesta Veggie burrito. Burritos are the cheapest items on Taco Bell's menu. They cost between $1-$2 and provide around 500 cal of energy.
  • Nachos => Nachos are another favorite Mexican dish - basically chips with beans, cream, veg, etc. spread on top. They taste nice. However, they're expensive since they're not very filling: the Nachos BellGrande costs about $5 and has 750 cal.
  • Tacos => Tacos are crispy or soft shells filled with beans, veg, etc. The Nacho Cheese Doritos Locos Taco costs about $2.50 and has 170 cal, so per calorie it's expensive. The cheapest item here is the "spicy potato soft taco" at a dollar. Tacos in general do taste good.

Taco Bell has offers on their website all the time, where you can get a combo or a box with more items in it for a lower price. However, you only get those prices when ordering thru the app. So, always look on their website or app for offers before ordering (check the offers/rewards section of their app). They also have free items from time to time.

 

IHOP:

Another fast food chain. It sells pancakes at a decent price, and their pancakes are very filling. They also have a kids menu, which is cheaper. It's a good chain for when your kids are bored with everything else and you just want to try something different. T-Mobile, thru its T-Mobile Tuesdays app, offers free pancakes at IHOP from time to time.

You can get IHOP GC for 20% off a couple of times a year at grocery stores, Amazon, etc.


DEALS:

 

All gift card deals for fast food are in the gift card section. Consider buying those GC where possible, and then get these deals.


Subway Offer - BOGO (select restaurants) offer multiple times a year => expires every few months

https://slickdeals.net/f/16059892-subway-buy-one-footlong-get-one-free-w-code-ymmv

The deal always shows up with a new promo code (or some older code). Every time it comes, it's valid for 1-2 months. Need to order online or from the app. The best value is 2 footlong subs for $7 (now $8 as of 2023). The ones with a veggie patty go for about $10. You also get rewards if you have signed up, which equates to $2 for every 4-5 BOGO orders. Not bad :)

NOTE: Many Subway codes still work even though they expired a while back. Try these codes if you can't find a current one:

  • B2GO codes (3 Footlong): FTL1799
  • BOGO codes (2 Footlong): BOGOFTL, FREEFL, FLBOGO, FTLBOGO, FL1299, 1299FL, FL1399, FOOTLONG (need to add drink),
  • Individual codes (1 Footlong): FTL699,

Deals:


2025:


07/22/2025: Pizza Hut Offer - 1 Topping Personal Pan Pizza for $2 (limit 4) => every Tuesday

https://slickdeals.net/f/18475066-pizza-hut-select-locations-1-topping-personal-pan-pizza-2-valid-for-carryout-only-till-promotion-ends-tuesday-s-only?src=frontpage

Not available at all locations, and participating ones may sell out quickly.


02/14/2025: 7-11 Offer - Free Slurpee every Friday in Feb (No app needed) => expires end of Feb

https://www.doctorofcredit.com/7-eleven-free-slurpee-friday-during-february-2025/

No app needed. Just walk in and get a small slurpee for free.


02/10/2025: Dominos Offer - Large 10 topping Pizza for $10 => expires 03/02/2025

https://slickdeals.net/f/18111496-domino-s-pizza-large-any-crust-any-toppings-pizza-10-valid-thru-3-2-for-online-purchase-only

Great deal after a long time. These 10 toppings themselves are worth $5.


01/07/2025: Taco bell Offer - BOGO or free item on app  => expires 01/13/2025

https://slickdeals.net/f/18042708-taco-bell-rewards-members-cravings-value-menu-b1g1-free-cantina-chicken-bowl-5-more-in-app-at-participating-locations?src=frontpage

There are multiple choices. "Love", which gives you a BOGO item for anything on the "cravings menu", is the best deal, since you get a $3 item free. On the app only. You have 7 days from the day you choose your offer to redeem it, so if you sign up on 01/13/25, you have until 01/20/25 to use it.


2024:


08/20/2024: Chipotle Offer - BOGO by answering quiz questions => valid from 08/20/2024 - 08/22/2024

https://slickdeals.net/f/17702445-chipotle-coupon-offer-burrito-bowl-salad-or-tacos-bogo-free-w-quiz-limited-availability-daily-thru-8-22

The answers to the quiz for each day are provided in the comments section. The offer runs throughout the day, but BOGO codes are given out every hour to the first few folks, so try right on the hour. If you got only 25 points and no BOGO showed up, then you missed your chance, as you will not be eligible any more for that day. Try again another day. The code is valid in store, online and from the app.


08/10/2024: Cici's Pizza - Buffet for $5 only (valid on Mondays and Tuesdays only) => expires Nov 12, 2024

Valid on dine-in only, on Mondays and Tuesdays. Great deal, since the buffet includes not only pizza, but also dessert, salads, etc. You can also ask them to make you a custom pizza with your selected toppings for no additional price. Please do NOT waste food.

https://slickdeals.net/f/17683254-cicis-pizza-brings-back-4-99-buffet-on-mondays-and-tuesdays-until-november-12-2024-in-store-coupon-code-23063-show-coupon-before-ordering


07/01/2024: 7-Eleven Stores - Free small slurpee only on 07/11 (Thursday) => expires July 11, 2024

Valid only on 07/11. This offer comes every year on 07/11. No app requirement; just show up in store and get one. 7-Eleven is a gas station chain and may not have a presence in all states.

https://www.doctorofcredit.com/7-eleven-free-small-slurpee-7-11-only/


04/28/2024: Taco Bell - Discovery box for $5, valid only on Tuesdays => expires June 4, 2024

Valid only on Tuesdays. 3 different types of tacos are included for $5. You can swap as well as customize to make it veg only (by substituting beans for chicken). Make sure you swap the basic crunchy taco for another Doritos Locos or even a soft taco supreme for free. That makes it an even better deal.

https://slickdeals.net/f/17449566-participating-taco-bell-restaurants-taco-discovery-box-5-tuesdays-only-through-june-4?src=frontpage


04/22/2024: Dominos Offer - Large 2 topping Pizza for $7 => expires 04/28/2024

https://slickdeals.net/f/17446341-domino-s-pizza-large-2-topping-pizza-6-99-carryout-only


01/24/2024: Dominos Offer - Large 2 topping Pizza for $7 => expires 01/28/2024

https://slickdeals.net/f/17248609-dominos-large-2-toppings-pizzas-6-99-carryout-only?src=frontpage


2023:


10/30/2023: Taco Bell - Free Doritos locos Tacos in app => expires Nov 5, 2023

Free Doritos Locos Tacos for everyone (only via the app). This is in addition to the offer where they give a free Doritos Locos Taco for guessing who will steal the first base in the 2023 World Series.

https://slickdeals.net/f/16970113-free-doritos-locos-taco-for-taco-bell-steal-a-base-steal-a-taco-promo-free?src=frontpage


10/11/2023: Dominos Offer - 1 Free Medium Pizza with online order of $7.99 or more (incl Tax) => expires 02/11/2024

https://slickdeals.net/f/16975834-domino-s-pizza-medium-2-topping-pizza-free-w-7-99-qualifying-order-valid-for-delivery-or-carryout

Plenty of time to redeem this offer, as it ends after 4 months. Even if you cancel the original order, you still get the free medium pizza. However, it's limited to 1 per account. People try to get multiples of this offer by making multiple fake accounts.


10/03/2023: National Taco Day deals - Oct 4 only

Lots of BOGO and other cheap Taco offers on Oct 4.

Various Fast Food chains on Oct 4: https://slickdeals.net/f/16961173-national-taco-day-megathread-wednesday-october-4th

Taco Bell $10 monthly subscription for 1 free taco per day (sign up by Oct 4): https://slickdeals.net/f/16961758-national-taco-day-2023-taco-lover-s-pass-taco-bell-10


08/09/2023: Taco Bell - Free Doritos locos Tacos every Tuesday from 08/15 - 09/05:

Free Doritos Locos Tacos for everyone (only via the app; probably click on the offer under "offers"?).

https://www.doctorofcredit.com/taco-bell-free-taco-every-tuesday-8-15-9-5/


01/01/2023: Dominos Offer - $3 Tip for carryout order of $5 or more (incl Tax) => expires 03/26/2023

https://carryouttips.dominos.com/

This offer is back for 2023; it's the same as what was offered the last few times. You get a $3 tip for any order placed for > $5 (incl tax). You can order a large pizza with 1 topping (for $7.99). You will get a $3 tip to use the next week, Monday-Sunday. You can place the next order for $7.99 and put the code in the PROMO section. It'll charge you $4.99+tax, and will also give you a $3 tip to use on your next order. Thus every week you can order 1 pizza for $4.99+tax, until the expiry date. Combined with 20% off on Dominos GC, you are getting a large pizza for ~$4, which is on par with what it would cost to make one at home. Enjoy your pizza !!

NOTE: Unless you live in one of the 5 states with no sales tax, you can order the $7.99 large pizza for this offer. If you have no sales tax in your state, your total will be $4.99 (after applying the tip), which will disqualify you from getting the $3 tip on your current order. In that case, add an extra topping (for $1 more) or go with the Brooklyn style extra large pizza (for $2 more).


2022:


11/03/2022: Subway Offer - BOGO (select restaurants): PROMO: FLBOGO => expiry unknown

https://slickdeals.net/f/16145170-select-subway-restaurants-buy-one-footlong-sub-get-one-footlong-sub-free

This offer is back with a new promo code; it's the same as what was offered the last few times. Need to order online or from the app. The best value is 2 footlong subs for $7.


10/31/2022: Taco Bell - Free Doritos locos Tacos (via app only): Expires 11/09/2022

Free Doritos Locos Tacos for everyone (only via the app; click on redeem offer under "offers").

https://www.doctorofcredit.com/taco-bell-free-taco-when-somebody-steals-a-base-in-the-world-series-2/


09/30/2022: Subway Offer - BOGO (select restaurants): PROMO: FREEFOOTLONG => expiry unknown

https://slickdeals.net/f/16059892-subway-buy-one-footlong-get-one-free-w-code-ymmv

This offer is back with a new promo code; it's the same as what was offered the last few times. Need to order online or from the app. The best value is 2 footlong subs for $7.


08/30/2022: Subway Offer - BOGO (select restaurants): PROMO: FREESUB => expiry unknown

https://slickdeals.net/f/16002784-select-subway-restaurants-buy-one-footlong-sub-get-one-footlong-sub-free-with-coupon-code-freesub

This offer is back with a new promo code; it's the same as what was offered the last few times. Need to order online or from the app. The best value is 2 footlong subs for $7.


08/22/2022: Chipotle Offer - BOGO by answering quiz questions => valid from 08/22/2022 - 08/26/2022

https://slickdeals.net/f/15993382-chipotle-burrito-bowl-salad-or-tacos-bogo-free-w-quiz-online-mobile-orders-only

The answers to the quiz for each day are provided in the comments section. You can retry infinite times if you don't get all 10 questions right. The offer starts at 9AM PST each day and runs out of codes in 3-4 hours. Codes are valid for 7 days. You can get 1 code per day for each phone number provided. Need to order online or from the app.


06/20/2022: Taco Bell - $5 box (online or via app): Expires 06/22/2022

This deal is for a $5 box with a chalupa, taco, twists and a drink. An OK deal.

https://slickdeals.net/f/15846961-toasted-cheddar-chalupa-crunchy-taco-cinnamon-twists-fountain-drink-m-5


06/18/2022: Subway Offer - BOGO (select restaurants): PROMO: FREEFOOTLONG => expires 08/22/2022

https://slickdeals.net/f/15854488-new-subway-coupon-codes-for-june-buy-one-footlong-get-one-free-more-subway-via-app-purchase

This offer is back with the same promo code; it's the same as what was offered the last few times. Need to order online or from the app. The best value is 2 footlong subs for $7.


03/29/2022: Subway Offer - BOGO (select restaurants): PROMO: FREEFOOTLONG

https://slickdeals.net/f/15695377-select-subway-restaurants-5-99-footlong-7-99-meal-3-49-6-inch-5-99-meal-bogo-and-more

This offer is back; it's the same as what was offered the last few times. Need to order online or from the app. It seems fewer restaurants are participating in this promo code, so try a few around you to see if it works at any of them.


12/31/2021: Subway Offer - BOGO: PROMO: FREEFOOTLONG => expires 02/13/2022

https://slickdeals.net/f/15532663-select-subway-restaurants-buy-one-footlong-sub-get-one-footlong-sub-free-more

This offer is back after a gap of a few months. It's the same as what was offered the last few times, and just like previous offers, it runs for more than a month. It's buy one, get one free = effectively 2 subs for the price of one. Can't beat the price. Need to order online or from the app.


Various quotes here from all over the place:

Ancient Chinese Quotes on Youtube: https://www.youtube.com/watch?v=8vOojviWwRk

Three things never come back = Time, words and opportunity. So, never waste your time, choose your words, and never miss an opportunity - Confucius

There are 1000 lessons in a defeat, but only 1 in a victory - Confucius

Learn as if you are constantly lacking in knowledge, and as if you are constantly afraid of losing your knowledge - Confucius

Nature does not hurry, yet everything is accomplished - Lao Tzu

The one who asks a question is a fool for a moment but the one who never asks any is a fool for life - unknown

“If you only do what you can do, you will never be more than who you are” - Master Shifu

Strong minds discuss ideas, average minds discuss events, weak minds discuss people - Socrates

Smart people learn from everything and everyone, average people learn from their experiences, stupid people already have all their answers - Socrates

The only true wisdom is in knowing you know nothing - Socrates

The right question is usually more important than the right answer - Plato

The person who says he knows what he thinks but cannot express it usually does not know what he thinks. — Mortimer Adler

 Courage is the ability to go from one failure to another without losing enthusiasm - Churchill

 There are two ways to conquer and enslave a nation. One is by the sword. The other is by debt. – John Adams 1826

As our circle of knowledge expands, so does the circumference of darkness surrounding it. ― Albert Einstein

 As areas of knowledge grow, so too do the perimeters of ignorance - Neil deGrasse Tyson

There is no greater education than one that is self-driven. — Neil deGrasse Tyson

The empire, long divided, must unite; long united, must divide -- from the historic novel "Romance of the Three Kingdoms". This simply states the obvious: unity succeeds division and division follows unity. One is bound to be replaced by the other after a long span of time. This is the way with things in the world.

Difference between understanding and knowing - understanding is more important and thus the goal of learning.
 
"work expands so as to fill the time available for its completion" => Parkinson's Law
 
“EVERYONE IS A GENIUS! But if you judge a fish by its ability to climb a tree it will live its whole life believing that it is stupid.” — Albert Einstein.
 
“Education is not the learning of facts, but the training of the mind to think.”— Albert Einstein.
 
"We are better understood as a collection of minds in a single body than as having one mind per body" - Unknown. The gist of this is that when we are able to accept other people's view by keeping our minds open, we are going to go a long way towards being accepted and having constructive arguments.
 
"If you wish to make an apple pie from scratch, you must first invent the universe" - Carl Sagan
 
 Puzzle: Rich people need it. Poor people have it. If you eat it, you die. And when you die, you take it with you. What is it? => NOTHING

 A Woman's Loyalty is judged when the man has nothing And The Man's Loyalty is judged when he has everything => Somebody

 “Knowledge is not power. Knowledge is potential power. Execution of your knowledge is power” =>   Tony Robbins

"If I owe you $1K, I've a problem, but if I owe you $1M, you have a problem" => old saying

So many people spend their health gaining wealth, and then have to spend their wealth to regain their health. => unknown

 Taste success once, come once more => movie "83"

“What I cannot create, I do not understand.” – Richard Feynman

The best things in life we don’t choose — they choose us => Unknown

“All ideas are second-hand, consciously and unconsciously drawn from a million outside sources.” => Mark Twain

"Great genial power, one would almost say, consists in not being original at all; in being altogether receptive; in letting the world do all, and suffering the spirit of the hour to pass unobstructed through the mind." => Ralph Waldo Emerson

Optimism is a Force Multiplier => Steve Ballmer's speech

“The stock market is a device for transferring money from the impatient to the patient” - Warren Buffett

"Innovation is taking two things that exist and putting them together in a new way" - Tom Freston

“The best time to plant a tree was 20 years ago. The second best time is now.” - Unknown

"The world is not driven by greed. It's driven by envy." - Charles T Munger (Warren Buffett's partner and a billionaire)

"The best thing a human being can do is to help another human being know more." - Charles T Munger 

If you don't find a way to make money while you sleep, you are going to work until you die - Warren Buffett

Three things that can make a smart person go broke => Liquor, Ladies and Leverage (the 3 L's) => Charlie Munger (told by Warren Buffett in a shareholder meeting)

Duniya mein itna gum hai, mera gum to kitna kum hai (There's so much sorrow in the world, that my sorrow is negligible) => Hindi Movie song

 "If you are not willing to learn, no one can help you. If you are determined to learn, no one can stop you." =>unknown


“My game in life was always to avoid all standard ways of failing, You teach me the wrong way to play poker and I will avoid it. You teach me the wrong way to do something else, I will avoid it. And, of course, I’ve avoided a lot, because I’m so cautious.” - Charlie Munger

The easiest person to fool is yourself => Richard Feynman (American Physicist)

Why is pizza made round, then packed in a square box but eaten as a triangle? => Unknown


Practical Aspects of Deep Learning: Course 2 - Week 1

This course goes over how to choose various parameters for your NN. Designing a NN is a very iterative process. We have to decide on the number of layers, the number of hidden units in each layer, the learning rate, what activation function to use for each layer, etc. Depending on the field or application where the NN is being applied, these choices may vary a lot. The only way to find out what works is to try a lot of possible combinations and see what works best.

We looked at how a data set in ML is typically divided into a training set and a test set. We also have a dev set, which we use to try out our various implementations of the NN; once we narrow it down to a couple of NN that work best, we try those on the test set to finally pick one. For large data sets, the training set is usually ~99% of all data, while the dev and test sets are each small, at 1% or less.

Bias and variance:

Underfitting: High bias: Here the training data doesn't fit too well with our ML implementation. The training set error is high, and the dev set error is equally high. To resolve underfitting, we need a NN with more layers (a bigger network), so that we can fit better.

Overfitting: High variance: Here the training data fits our implementation too well. The training set error is low, but the dev set error is high. To resolve overfitting, we use more training data or use regularization schemes (discussed later).

Right fitting: Here data is neither under fitting nor over fitting.

Ideally we want low bias and low variance: the training set error is low and the dev set error is also low. For example, a 1% train error with an 11% dev error signals high variance, while a 15% train error with a 16% dev error signals high bias (assuming ~0% human error). The worst case is high bias and high variance: the training set error is high and the dev set error is even higher, so our ML implementation did badly everywhere. We solve both problems by selecting our ML implementation carefully and then deploying additional tactics to reduce bias and variance.

In the small data era, we used to trade off bias against variance, as improving one worsened the other. In the big data era, we can reduce both: bias can be reduced by adding more layers to our network, while variance can be reduced by adding more training data.

Regularization: 

This is a technique used to reduce over fitting (high variance). The basic way we prevent over fitting is by spreading out the weights, so that we don't allow over-reliance on a small set of weights. This makes our data fit less tightly, and by doing so prevents over fitting. There are many techniques to achieve this; below are 2 of them.

A. L1/L2 regularization:

This is done by lowering the overall weight values, so that the weight terms are closer to 0 and have less of an impact. You can think of the new NN with lower weights as a reduced NN, where some of the weight terms have effectively vanished. Another way to see it: with weights close to 0, activation functions like sigmoid and tanh remain in the linear region of their curves, so the whole NN behaves more like a linear network, just adding up the linear portions of all activation functions. That is close to logistic regression, which is a single layer linear NN.

To achieve regularization, we add the sum of the weights to the cost term, and try to minimize the new cost (including the weight terms). The cost minimization will then also try to keep the weights low, so that the overall sum of weights remains low. There are 2 types of regularization:

L1 Regularization: Here we add modulus of weights to cost function:

For Logistic Regression: J(w,b) = 1/m * ∑ L(...) + λ/(2m) * ∑ |w_i| = 1/m * ∑ L(...) + λ/(2m) * ||w||_1, where the sum is over all inputs (i=1 to i=n_x).

For an L layer NN: Here w is a matrix for each layer. The regularization term added is λ/(2m) * ∑_l ||w[l]||_1, where we sum over all layers (layer 1 to layer L), adding all weight terms of each layer's matrix, i.e.

||w[l]||_1 = ∑_i ∑_j |w[l]_i,j| where i=1 to n[l-1], j=1 to n[l] => all terms of the matrix are added together (in an L layer NN, the dim of w[l] is (n[l], n[l-1])).

L2  Regularization: Here we add modulus of square of weights to cost function:

For Logistic Regression: J(w,b) = 1/m * ∑ L(...) + λ/(2m) * ∑ w_i² = 1/m * ∑ L(...) + λ/(2m) * ||w||², where ||w||² = w·wᵀ, summed over all inputs (i=1 to i=n_x).

For an L layer NN: This is the same as L1 regularization, except that we square each weight term. The regularization term added is λ/(2m) * ∑_l ||w[l]||², where we sum over all layers (layer 1 to layer L), adding the squares of all weight terms of each layer's matrix, i.e.

||w[l]||² = ∑_i ∑_j (w[l]_i,j)² where i=1 to n[l-1], j=1 to n[l] => all terms of the matrix are squared and then added together. For historical reasons this is known as the Frobenius norm rather than the L2 norm; "L2 norm" is used when dealing with a single vector, as in Logistic Regression.
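
A minimal numpy sketch of this L2-regularized cost, assuming the parameters dict layout (W1, b1, W2, ...) used in the course assignments and that the plain cross-entropy cost is already computed:

import numpy as np

def l2_regularized_cost(cross_entropy_cost, parameters, lambd, m, L):
    # Frobenius term: sum of squares of every weight entry, over all L layers
    frob = sum(np.sum(np.square(parameters["W" + str(l)])) for l in range(1, L + 1))
    return cross_entropy_cost + (lambd / (2.0 * m)) * frob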

When calculating dw[l] (i.e., dJ/dw) for an L layer NN, we need to differentiate this extra term too, which adds an extra term λ/m * w[l]. When updating w[l] = w[l] - α*dw[l], we now have this extra term: w[l] = w[l] - α*(dw[l] + λ/m * w[l]), where dw[l] refers to the original dw[l] from before regularization.

So the new w[l] = (1 - α*λ/m) * w[l] - α*dw[l] => the eqn keeps the same form as earlier, except that w gets multiplied by the factor (1 - α*λ/m). Since this factor is less than 1, the weights shrink from their original values. This is why L2 regularization is also called "weight decay".

λ is called the regularization parameter. It's another hyperparameter that needs to be tuned to see what works best for a given NN. "lambda" is a reserved keyword in Python, so we use "lambd" as the variable name instead.

NOTE: in both cases above, we don't sum "b" (i.e., we don't add λ/(2m)*b or λ/(2m)*b²), as it has negligible impact on reducing over fitting.
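
A matching sketch of the update step in the weight-decay form derived above (same assumed parameters/grads dict layout; dW is the gradient of the unregularized cost):

def update_parameters_with_l2(parameters, grads, alpha, lambd, m, L):
    for l in range(1, L + 1):
        W = parameters["W" + str(l)]
        dW = grads["dW" + str(l)]
        # same as W - alpha*(dW + (lambd/m)*W)
        parameters["W" + str(l)] = (1 - alpha * lambd / m) * W - alpha * dW
        parameters["b" + str(l)] = parameters["b" + str(l)] - alpha * grads["db" + str(l)]  # b is not decayed
    return parameters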

B. Dropout Regularization:

Here, we achieve regularization by dropping out hidden units randomly on each iteration of cost optimization. This keeps our algorithm from depending too heavily on any one weight term or set of weight terms, since any of them may disappear at any time, during any iteration. This spreads the weights more evenly, reducing over fitting. It may seem like a hanky-panky kind of scheme, but it works well in practice.

Inverted Dropout: A revised and more effective implementation of dropout is inverted dropout, where we scale the surviving activation values appropriately, so that the expected activation values remain unchanged irrespective of how many hidden units we dropped. A sketch follows the note below.

NOTE: Dropout regularization is applied only on training data, NOT on test data. This is expected, since once the weights are finalized by training with dropout, we use all of them on test data.
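
A minimal numpy sketch of inverted dropout on one layer's activations A (keep_prob is the fraction of units kept, e.g. 0.8; the function name is made up):

import numpy as np

def inverted_dropout(A, keep_prob):
    D = np.random.rand(*A.shape) < keep_prob   # random mask: True = keep this unit
    A = A * D                                  # zero out the dropped units
    A = A / keep_prob                          # scale survivors so E[A] is unchanged
    return A, D                                # D is reused to mask dA in backprop

At test time this function is simply not called; the /keep_prob scaling during training is what makes that valid.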

C. Other Regularization:

1. Data augmentation: We'll always achieve better regularization with more data. Instead of getting more data, we can use existing data to augment our training set. This can be done by using mirror images of pictures, zoomed-in pictures, rotated pictures, etc.

2. Early stopping: Here we stop our cost optimization loop after a certain number of iterations, instead of letting it run for a large number of iterations. This reduces over fitting. L2 regularization is preferred over early stopping, as you can usually get the same or better variance with L2 regularization than with early stopping.

Normalize inputs:

We normalize the input vector x by subtracting the mean, and dividing by the std deviation (the square root of the variance).

So, X_normal = (X_orig - µ) / σ, where the mean µ = 1/m * Σ X(i)_orig (summing over the m samples of each X), and the std deviation σ = √( 1/m * Σ (X(i)_orig - µ)² ).

If there are 5 i/p vectors X1,...,X5, we do this for each of the 5 vectors over all m examples. This helps because subtracting the mean centers each X around the origin, and dividing by the std deviation scales each dimension so that X is scattered over the same range in all dimensions. This makes our i/p vector X more symmetrical, so finding the optimal cost goes more smoothly and faster.
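
A minimal numpy sketch for an input matrix X of shape (n_x, m), one row per feature (function name made up). On test data, reuse the µ and σ computed from the training data:

import numpy as np

def normalize_inputs(X):
    mu = np.mean(X, axis=1, keepdims=True)                          # per-feature mean, shape (n_x, 1)
    sigma = np.sqrt(np.mean((X - mu) ** 2, axis=1, keepdims=True))  # per-feature std deviation
    return (X - mu) / sigma, mu, sigma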

Vanishing/Exploding gradients:

With a very deep NN, we have the problem of vanishing or exploding gradients, i.e., gradients become too small or too big. Prof Andrew shows this with an example of how the final weight matrix picks up an exponent of "L": values greater than 1 in the weight matrix start exploding, while values less than 1 start vanishing (going toward 0). One way to partially solve this is to initialize the weight matrix correctly.

Initializing Weight matrix:

For any layer l, the o/p Z = w1*x1 + .... + wn*xn. If the number of inputs n is large, we want the weights w1..wn to be small, so that Z doesn't become too large. So we scale each weight matrix element by 1/n (in practice, by 1/√n). This ensures our weight elements don't get too big. Initializing to "0" doesn't work, as it's not able to break symmetry.

For random initialization, we multiply as follows:

1. tanh activation function: For tanh, it's called Xavier init, and is done as follows: W[l] = np.random.randn(shape) * np.sqrt(1/n[l-1]). We use the size of layer (l-1) instead of "l", since we scale by the input layer size, and the i/p size of layer "l" is n[l-1] (see the sketch after this list).

2. Relu activation function: For Relu, it's observed that np.sqrt(2/n[l-1]) works better (He initialization).

3. Others: Many other variants can be used, and we'll have to just try and see what works best.
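
A minimal numpy sketch of this scaled random init (layer_dims like [n_x, n_h, n_y]; the "method" switch is made up for illustration):

import numpy as np

def initialize_parameters(layer_dims, method="he"):
    params = {}
    for l in range(1, len(layer_dims)):
        n_prev = layer_dims[l - 1]
        scale = np.sqrt(2.0 / n_prev) if method == "he" else np.sqrt(1.0 / n_prev)  # he (Relu) vs xavier (tanh)
        params["W" + str(l)] = np.random.randn(layer_dims[l], n_prev) * scale
        params["b" + str(l)] = np.zeros((layer_dims[l], 1))  # zeros are fine for b
    return params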

Gradient Checking:

Definition of the derivative: dF(x) = lim(e->0) [F(x+e) - F(x-e)] / 2e, where e goes to 0 in the limiting case.

We use this definition to check gradients, by comparing the value obtained from the eqn above against the gradient calculated by our backprop formulas. If the difference is large (i.e., > 0.001), we should doubt the dw and db gradients computed by the pgm.
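
A minimal numpy sketch of this check for a flattened parameter vector theta (J is the cost as a function of theta, dtheta the backprop gradient; names made up):

import numpy as np

def gradient_check(J, theta, dtheta, eps=1e-7):
    num = np.zeros_like(theta)
    for i in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp.flat[i] += eps
        tm.flat[i] -= eps
        num.flat[i] = (J(tp) - J(tm)) / (2 * eps)   # two-sided difference
    # relative difference between numerical and backprop gradients; worry if > 1e-3
    return np.linalg.norm(num - dtheta) / (np.linalg.norm(num) + np.linalg.norm(dtheta))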

 

Programming Assignment 1: Here we learn how different initializations of the weight matrix result in totally different training accuracy. We apply the different init mechanisms to a 3 layer NN:

  • zero initialization: doesn't work, unable to break symmetry. Gives the worst accuracy on the training set
  • large random initialization: very large weights cause the vanishing/exploding gradient problem, so it gives poor accuracy on the training set.
  • He initialization: this works well; weights are scaled by √(2/n) to keep the initial weights low, resulting in very high training accuracy

Here's the link to the pgm assignment:

Initialization(1).html

This project has 2 python pgm.

A. init_utils.py => this pgm defines various functions similar to what we used in previous assignments

init_utils.py

B. test_cr2_wk1_ex1.py => This pgm calls functions in init_utils. It does all 3 initializations discussed above. We unknowingly did He initialization in the previous week's exercise.

test_cr2_wk1_ex1.py

 

Programming Assignment 2: Here we use the same 3 layer NN as above, and apply different regularization techniques to see which works best. These are the 3 different regularizations applied:

  • No regularization: here test accuracy is lower than training accuracy, due to overfitting. Gives high accuracy on the training set, but low accuracy on the test set
  • L2 regularization: here we apply L2 regularization, which results in lower accuracy on the training set, but better accuracy on the test set. The parameter lambda can be tuned for more or less smoothing of the fitted curve. A very high lambda can result in under fitting, i.e., high bias.
  • Dropout regularization: this works best, as we get lower training accuracy but the highest test accuracy.

Here's the link to the pgm assignment:

Regularization_v2a.html

This project has 3 python pgm.

A. testCases.py => There are a bunch of testcases here to test your functions as you write them. In my pgm, I've turned them off.

testCases.py

B. reg_utils.py => this pgm defines various functions similar to what we used in previous assignments.

reg_utils.py

C. test_cr2_wk1_ex2.py => This pgm calls functions in reg_utils. It does all 3 regularizations discussed above (including no regularization).

test_cr2_wk1_ex2.py

 

Programming Assignment 3: Here we employ gradient checking to find out whether our back propagation is computing gradients correctly. This is an optional exercise that can be skipped, as it's not really needed in the later AI courses.

Here's the link to the pgm assignment:

Gradient+Checking+v1.html

This project has 3 python pgm.

A. testCases.py => There are a bunch of testcases here to test your functions as you write them. In my pgm, I've turned them off.

testCases.py

B. gc_utils.py => this pgm defines various functions similar to what we used in previous assignments.

gc_utils.py

C. test_cr2_wk1_ex3.py => This pgm calls functions in gc_utils. It does the gradient checking.

test_cr2_wk1_ex3.py