Friday, October 2, 2020

Share your Google Calendar with a desktop app (written in Node.js) using the Google Calendar API

Use Case:

Alex is a busy, social person who stores his schedule on Google Calendar. He wants to be able to look at the highlights of what is coming up in text form. He is a geek and prefers command-line interaction to view his calendar (his favorite editor is vi). He also wants to share his calendar with Sally, a fellow geek who wants to use a terminal to look at Alex's schedule. Alex wants to be able to:


   1) share his calendar with his pal Sally, and only her

   2) view it from a command line (say, a terminal on his MacBook)


Google has made this fairly easy to do, with a tutorial and clear instructions in the Google Calendar API console. Here is a quick walkthrough of the process to enable the Google Calendar API.





Enable Google Calendar API

To start the process of making Alex's calendar viewable by Sally (and in text form), Alex logs into his Google account and heads over to his Google Cloud console (https://console.developers.google.com). He creates a new project and gives it the project name "gcx". He decides that he will write the Google Calendar client, aka app - the reader he will invoke from the terminal - in Node.js for Sally. He follows the process to "enable Google Calendar API". This step creates credentials.json, which holds 1) a Client ID and 2) a Client Secret. These help Google know whose Google services to target - in this case, Alex's.
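For reference, here is a minimal sketch of the shape of a downloaded credentials.json for a desktop app. The field names follow the common "installed" layout Google uses for desktop OAuth clients; every value below is a placeholder, not a real credential:

```javascript
// Hedged sketch: the rough shape of credentials.json for an OAuth desktop app.
// All values are placeholders; your real file comes from the Google Cloud console.
const credentials = {
  installed: {
    client_id: "1234567890-abc.apps.googleusercontent.com",
    client_secret: "not-a-real-secret",
    redirect_uris: ["urn:ietf:wg:oauth:2.0:oob", "http://localhost"]
  }
};

// The reader app needs these two values to identify itself to Google.
const { client_id, client_secret } = credentials.installed;
console.log(client_id, client_secret.length > 0);
```

If your downloaded file uses a different top-level key (for example "web" for web apps), the sample code has to read that key instead.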


Node.js App

To create the Node.js Google Calendar reader, Alex finds a useful sample code and sends it to Sally to use. Before she invokes it with "node index.js", she needs to use npm to install the Google Calendar API SDK by typing "npm install googleapis@39 --save". If Sally skips this step, she will run into "Error: Cannot find module 'googleapis'".


App Asking Permission to Alex's Calendar

The first time the reader is invoked, it will contact Google. Google notes that this is an unverified application (a desktop application written in Node.js, to be specific) trying to access Alex's calendar. Google wants to make sure that Alex is really OK sharing his social calendar with this reader, so it asks the user of the reader (Sally) to send Alex a specific, just-created authorization link. That link asks Alex to log into his Google account and authorize the calendar reader to read his calendar.
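Under the hood, that authorization link is a standard OAuth 2.0 consent URL; the googleapis library assembles it for you via generateAuthUrl(). Purely as an illustrative sketch of the link's shape (the client_id below is a placeholder; the scope is the read-only calendar scope):

```javascript
// Sketch of assembling an OAuth 2.0 consent URL like the one Sally sends to Alex.
// This only shows the shape of the link; client_id is a placeholder value.
const params = new URLSearchParams({
  client_id: "1234567890-abc.apps.googleusercontent.com",
  redirect_uri: "urn:ietf:wg:oauth:2.0:oob",
  response_type: "code", // ask for an authorization code Alex can copy back to Sally
  scope: "https://www.googleapis.com/auth/calendar.readonly", // read-only access
  access_type: "offline" // also get a refresh token for later runs
});

const authUrl = "https://accounts.google.com/o/oauth2/v2/auth?" + params.toString();
console.log(authUrl);
```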


Alex Gives Permission

Because Alex trusts Sally, he visits the link and clicks [v] View, then [Allow] to allow Sally's reader to use the Google Calendar API to read his calendar. Alex receives an authorization code that he sends to Sally. Sally enters the authorization code, and voilà, she can read Alex's calendar, on a terminal!


Conclusion 

Alex stores his social calendar in his Google calendar. He wants a way to share it with people he trusts (Sally), in the format they like to view it in (text on a terminal). The Google Calendar API makes this happen easily.



Sunday, September 13, 2020

Makefile : the forgotten Unix command for build automation

1. Introduction:


A software program (or service, a term used nowadays to describe software that runs in the cloud instead of on your computer) is usually created from multiple separate source files, libraries, etc. The separate source files are compiled in order, and libraries are linked in, to make a software program.


You change a source file, you run one command, and all associated source files impacted by the change are recompiled. The end result is a new, updated version of the software program with the latest source code change(s).

There are multiple build automation tools for this, such as Ant, Maven, and Gradle. But before diving into them, you can learn the fundamentals on your Unix-based laptop, using a Unix command called "make".



2. The anatomy of a Makefile


The Unix "make" command has been available since 1976.  The simple idea behind make is that:


   1) you have a bunch of source files

   2) you know how to compile the source files into final code

   3) you know how to run the final code


You "codify" these into a text file called Makefile. I won't go into the full syntax of a Makefile here, but rather focus on what it does for now. Using a Hello Java program as an example, let's look at a Makefile:


   --- Makefile ---

   target : dependency

   <tab> command


Decoding the Makefile:


   target : the compiled file, such as Hello.class; think of this as the output of the command

   dependency : the source code that the compiled file (target) depends on, such as Hello.java

   command : how you compile the dependency (source file) into the target (compiled file),
                     such as javac Hello.java


So for the above example:


   Hello.class:    Hello.java

                          javac Hello.java


This says Hello.class depends on Hello.java. If Hello.class is older than Hello.java, invoke the command "javac Hello.java" to compile and update Hello.class.
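That timestamp comparison is the heart of make. Purely as an illustration (sketched here in JavaScript, not make's real internals), the decision reduces to a few lines:

```javascript
// Illustration of make's core decision rule (not real make internals).
// Modification times are modeled as plain numbers; null means "target missing".
function needsRebuild(targetMtime, depMtimes) {
  if (targetMtime === null) return true;            // no Hello.class yet: must build
  return depMtimes.some(dep => dep > targetMtime);  // rebuild if any dependency is newer
}

console.log(needsRebuild(null, [100])); // first build -> true
console.log(needsRebuild(200, [100])); // Hello.class up to date -> false
console.log(needsRebuild(100, [200])); // Hello.java just edited -> true
```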


3. A Real Example:


Let's use a simple working Hello java example:

class Hello {

   public static void main (String [] sin) {

      System.out.println("Hello");

   }

}

   1) you have a bunch of source files (say Hello.java)

   2) you know how to compile the source files into final code (javac Hello.java)

   3) you know how to run the final code (java Hello)


If you want to try out a simple Unix command-line way of building a Java application (which just prints Hello), you can follow along:



----- Makefile -----


go:     Hello.class     

        java Hello



Hello.class:    Hello.java

        javac Hello.java


clean:

        rm -f Hello.class



---- using make to build & run, for the very first time  ---

%make -n # let's see what make will do, but don't do it, could have used --dry-run

ac-a01:0HelloWorld chiangal$ make -n


javac Hello.java

java Hello

%make # compile, run, but this time, do it for reals

ac-a01:0HelloWorld chiangal$ make


javac Hello.java

java Hello

Hello


%make # this time only run

ac-a01:0HelloWorld chiangal$ make


java Hello

Hello


%vi Hello.java # modify the source file Hello.java by adding ! to Hello

class Hello {

   public static void main (String [] sin) {

      System.out.println("Hello!");


   }

}

%make # because the source code Hello.java has been changed, make will re-compile, run

chiangal-a01:0HelloWorld chiangal$ make


javac Hello.java

java Hello

Hello!


%make -n clean; ls Hello.class # everything works; let's clean up and go to sleep, but preview the command first

ac-a01:0HelloWorld chiangal$ make -n clean; ls Hello.class 


rm -f Hello.class

Hello.class


%make clean; ls Hello.class # looks good, let's do it : remove generated files

ac-a01:0HelloWorld chiangal$ make clean; ls Hello.class 


rm -f Hello.class

ls: Hello.class: No such file or directory



4. Conclusion:

The Unix "make" command is a good way to learn about build automation on your own laptop, using any flavor of Unix (like macOS). With the single command "make", Unix reads the Makefile you created, knows what the final program is (Hello.class), finds what it depends on (Hello.java), and knows the command to compile it (javac Hello.java).

Wednesday, July 8, 2020

Real Time Micro Marketplace Using IoT, AI/ML, Blockchain 2021

On my garage shelf is a quart of special brake fluid that was very expensive; I still have a lot of it left, and it is close to expiring. It seems like a waste for it to sit on the shelf, expiring away, when someone nearby could use it. And by nearby, I don't just mean living nearby. It could also be a person who just happens to be passing through my neighborhood, broadcasting passively that he is in need of that special brake fluid.

That led me to think about the world of supply chain. In a simplified view of the supply chain world, we can categorize actors as 1) producers of stuff & services, 2) deliverers of stuff & services, and 3) consumers of stuff & services. Stuff is usually tangible assets, like cars, TVs, furniture. Services are usually intangible, such as hospital care, lawyer consultation, financial services. But that model of the world is changing. The production of stuff kicks off with a plan. Demand-driven planning is aggregated from sales forecasts, marketing activities, foreseen competition movement, expected geo-political events, anticipated seasonality, production capacity, historical records, etc. Supply-driven planning is aggregated from factory capacity, supplier capabilities, expected geo-political events, and competition. But these plans usually don't just line up, resulting in excess inventory or stock-outs of products.

But there is a new world, where stuff & services become available on-the-fly (that is, without planning). It can be excess inventory - stuff that was produced but no one bought - an idle car passing by that is going exactly where you need to go, a half quart of special brake fluid. A new marketplace is needed: a real-time micro marketplace - one so efficient that even a quart of the special brake fluid can be shared.

A trusted (blockchain), real-time (IoT) network so easy and valuable (AI/ML) to join that everyone will want to join.

So back to the special brake fluid example. I post on the real-time micro marketplace that I have a particular brand, product, amount, and price to offer. It is broadcast on the marketplace, accessible not only to those living close to me, but also to cars passing by. What technologies are needed to make this work?

Internet of Things : real-time sensors providing location, brake fluid level, squeal level

Blockchain : asset and financial data stored here is authenticated, collected in real-time, cannot be altered, and trusted because it is distributed

AI/ML : predict demand for special brake fluid, recommend to those who might also own a similar car, suggest pricing in accordance with expiration date

Cloud : to provide IoT, Blockchain, and AI/ML as services (meaning you don't need to buy hardware, plug it into a power socket, install software on it, connect it to the network, and maintain it 24/7)

In conclusion, current products & services are built on traditional planning and fulfillment systems. But in a world where much value is left on the table for products & services that become available on-the-fly, a new system is needed to connect the producers to the consumers. A real-time micro marketplace can make this happen, thanks to the power of IoT, blockchain, and AI/ML.

Monday, June 29, 2020

Application Programming Interface (API) Introduction

Introduction

An Application Programming Interface (API) is a communication mechanism that allows one program (let's call it the client) to call another program (let's call it the server). The reason a client program calls another program is that the client lacks a certain functionality it needs to be a complete program - but that functionality can be found in another program: the server. For the client to know how to call the server, the server program comes with instructions on what information (the request) needs to be passed to it, and what information will be returned to the client. These instructions for how the client program can make a request to the server program are called an API. The API isn't a recent invention. Before the explosion of the internet (using HTTP for programs to talk to each other), a client program (usually written in C/C++ or Java) could call libraries located remotely via remote procedure call (RPC). The remote library could be another standalone server program on the same machine, waiting for a call from the client program. RPC is considered one of the ways to implement inter-process communication (IPC).

With the advent of the internet (which focuses on networking) and its sister term cloud (which adds compute, storage, and applications), the client program is now a mobile app or web browser. The server program provides a service and runs on a computer (called a server) in another part of the world. The client and server communicate through the cloud. A very common way for clients and servers to talk to each other is via the Hypertext Transfer Protocol (HTTP).


Hypertext Transfer Protocol (HTTP) and HTTP Request Verbs

Even as you read this in a web browser, your browser has already used HTTP to talk to a web server. The most common HTTP request verbs are:

   GET : retrieve data from the web server
   PUT : write data to a web server
   POST : write data to a web server, where the data will be processed first
   DELETE : remove data from a web server


Representational State Transfer (REST) API

An API is often described as RESTful - Representational State Transfer. What the heck does "representational state" mean? In the simplest terms, it means "the value of a data field" on the server. "Transfer" means that the value stored on the server is being read by the client. Data fields and their values are often found in a database attached to the server. The database can be used to track, say, a list of electronic products. Basic data fields in this database of electronic products can include: product name, sku, brand, price. Zooming in on the data field called "product name", it will contain a value, say "Bluetooth Headset". "Bluetooth Headset" represents the state (value) of the data field "product name".



      +------------------------------+
      |  one record                  |
      |                              |
      |   +------------+---------+   |
      |   | Data Field | Value   |   |
      |   +------------+---------+   |
      |   | SKU        | 1000006 |   |
      |   +------------+---------+   |
      |   | Brand      | Bose    |   |
      |   +------------+---------+   |
      +------------------------------+



Tools

Even a plain Google search is an HTTP GET request with query parameters. For example, searching for the weather in Mumbai produces a URL like:

https://www.google.com/search?q=weather+in+mumbai&oq=weather+in+mumbai



Real Example of an API Call, Simple Version to List All Products:

Let's look at a real-life example of how APIs are used. Bestbuy provides a service (a server program) that allows client programs to query the Bestbuy catalog online via its developer API.

   Bestbuy has a ton of products.

   Each product has multiple attributes. Attributes are simple data that 1) describe the
   product (such as color), or 2) are metadata (such as SKU). In real life, a Bestbuy product
   will have these attributes and more: sku, productID, name, upc, categoryPath.

   You can use Bestbuy product API to show all products.

   Paste the command below into a web browser. You need to have applied for an
   apiKey beforehand from Bestbuy. Here I replaced my key with "thisasecret123".
   In place of this apiKey, USE YOUR OWN API KEY that you have obtained
   from Bestbuy. If you prefer a terminal command line, you can use the "curl" program.

       https://api.bestbuy.com/v1/products?apiKey=thisasecret123&format=json


      Because I have a valid apiKey, I see results back from Bestbuy, in JSON format, as
      requested. Let's break the request down:

         REST request : GET
         endpoint     : api.bestbuy.com/
         resource     : v1/products
         parameter    : ?apiKey=thisasecret123&format=json
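That breakdown can be reproduced with the standard WHATWG URL class (built into modern browsers and Node.js); the apiKey below is the same placeholder used in the text:

```javascript
// Dissecting the Bestbuy request URL into endpoint, resource, and parameters.
// "thisasecret123" is the placeholder key from the text, not a real apiKey.
const url = new URL("https://api.bestbuy.com/v1/products?apiKey=thisasecret123&format=json");

console.log(url.host);                       // endpoint  -> api.bestbuy.com
console.log(url.pathname);                   // resource  -> /v1/products
console.log(url.searchParams.get("apiKey")); // parameter -> thisasecret123
console.log(url.searchParams.get("format")); // parameter -> json
```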

      Looking at the results, the first of the tons of products is:

         sku:                1000006
         name:             "Spy Kids: All the Time in the World [Includes Digital Copy] [Blu-ray] [2011]"
         categoryPath[0]:   id: "cat00000",     name: "Best Buy"
         categoryPath[1]:   id: "abcat0600000", name: "Movies & Music"
         categoryPath[2]:   id: "cat02015",     name: "Movies & TV Shows"

  

Real Example of an API Call, Searching for a Product by SKU:

   Let's say a customer says "I want to buy sku=1000006. What is it?"

   You can use Bestbuy API to lookup a specific sku.

       GET api.bestbuy.com/v1/products(sku=1000006)?apiKey=thisasecret123&format=json

          REST request : GET
          endpoint     : api.bestbuy.com/
          resource     : v1/products(sku=1000006) <- notice that the resource has the
                         filter term (sku=1000006) hard-wired in:
                         attribute(sku) operator(=) value(1000006)
          parameter    : ?apiKey=thisasecret123&format=json

       Results from the API call:

          "products": [
              {
                "sku": 1000006,
                "score": null,
                "productId": null,
                "name": "Spy Kids: All the Time in the World [Includes Digital Copy] [Blu-ray] [2011]"
              }
          ]



A Little Introspection About API Request Style


    It is interesting to note that there seem to be two styles of filtering a search: 1) the filter
    built into the resource, and 2) the filter passed as a parameter


       1) GET api.bestbuy.com/v1/products(sku=1000006)?apiKey=thisasecret123&format=json
                                         ^
                                         +---- filter built into resource

       2) GET api.openweathermap.org/data/2.5/weather?q=london&APPID=anothersecret456
                                                      ^
                                                      +---- filter passed as a parameter

Thursday, June 27, 2019

IoT And Blockchain – Finally The Right Technologies for Food Supply Chain Track & Trace



A Personal Experience with Food Safety – the Need for Track & Trace

It was 2013. I was staying in the posh Grand Hyatt Shanghai. As the March sun fought through the all too common afternoon haze, I sat in my room high above the city. Looking down from above the 80th floor at a river nearby, I pondered what I wanted for dinner. Pork, for sure, should be on the list. Images of pork dishes rushed immediately to my head: pork dumplings, chasu, mapo tofu, twice-cooked pork, soy sauce braised pork, sweet-and-sour pork - all tantalized my appetite. But alas, pork was scratched off my list. You might wonder why? Pork is a main staple in China - so much so that China ranks second in the world in per capita pork consumption (https://www.pork.org/facts/stats/u-s-pork-exports/world-per-capita-pork-consumption/). With so much consumption, they should really know their pork. So why not? The answer is: food safety.

I Love Pork – Just Not Today Please

Around the time of my visit, there were reports in the news media of dead pigs being dumped into the Huang Pu Jiang, a major river that ran through Shanghai (and the very same river that I was looking at from high up). As the hours passed, more details emerged. The dead pigs were being dumped illegally into this river by farmers to rid themselves of diseased pigs. What compounded matters, and ultimately led me to delist pork, was that these dead and diseased pigs were being plucked out of the water by unscrupulous opportunists, then resold as pork. But China is not alone in facing food safety problems. The United Kingdom had to fight "mad cow" disease. The United States has had multiple recalls of farm products such as romaine lettuce. So what can be done?


Track & Trace Needed For Food Safety, Authenticity, Conditions at Origin

Food safety requires knowing the entire history of food source, and hence a mechanism of track and trace is needed. Tracking is the process and technology needed to tag and record attributes of the food source, such as time, place, and temperature. Tracing is the process and technology needed to view the complete history of the tracked food, such as show the complete journey of the pork chop on your dinner table. The food path can be long – from farm to processor to storage to grocery – on a variety of transportation modes. This long path creates risk and vulnerability for the supply chain. Once track & trace is enabled, there are multiple benefits, including:

  1.  Food safety : Is the pork sourced from a reputable farm?
  2.  Authenticity: Is your Iberico Jamon real?
  3.  Conditions at origin: How do you know if your food is organic, cage-free, or grass-fed?



Challenges of Current Track & Trace Technologies

Track & trace processes and technologies have been deployed for decades, including RFID and serialization technologies. But to achieve true end-to-end track & trace, data needs to be collected, processed, and responded to in real time. Current track & trace technologies cannot do this, for multiple reasons. Some systems are paper based, resulting in erroneous entries and delayed uploading. Some systems are not real time, so urgent data cannot be acted on promptly. Separate systems provide different versions of the same data, requiring effort to map and/or merge them, resulting in mistakes and delays.


Emerging Technology to the Rescue: Tracking using IoT, Tracing using Blockchain

To address the shortcomings of existing track & trace technologies, we can look at the combination of two emerging technologies: IoT and blockchain. IoT is used to help track all aspects of food, including place, time, and temperature. The pencil and paper method of collecting data is replaced with sensors that send error-free data in real time. Think of IoT as the eyes, ears, and nose of the track & trace system. In parallel, blockchain can be used as the central, single data store for the data collected by IoT. So instead of separate systems, each storing its own version of the truth whenever it deems convenient, data now arrives in a timely, consistent manner. Think of blockchain as a ledger used to track all of the information gathered by IoT.


Current Examples of Success: Alpha Acid Brewing, World Bee Project Hive Network

There already are early adopters of blockchain and IoT for food track & trace. One example is IBM Food Trust, where you can find multiple types of food being tracked & traced, including coffee and precision agriculture. Another example is Alpha Acid Brewing. Starting from the condition of the raw materials (hops, malt, and yeast), the entire beer chain is tracked using IoT and traced using blockchain. In addition to aiding in creating quality beer, the blockchain can also enhance the customer's drinking experience with information (malt, yeast supplier) about the beer they are drinking.


Conclusion

The case for food supply chain track & trace is not new. However, the technology that currently supports it is old. With the emergence of IoT and blockchain, it is now time to finally solve this problem to improve food safety, verify authenticity, and validate the conditions at origin.

Thursday, March 21, 2019

LAMP Stack Introduction

Abstract : LAMP (Linux OS, Apache Web Server, MySQL Database, PHP Programming Language) is a collection of software that works together to provide a backend web server service to a web browser client front end. The LAMP backend uses Apache, which can handle complex transactions such as talking to a MySQL database.




Initial Problem and UX Requirements (What):

Craigslist is a web site where users can browse or search for goods or services to buy or sell. Basic requirements are:

Browse:

  • For Sale : browse things to buy 
  • Services : browse services to rent 
  • Location : localize browsing of things to buy or services to a region


Post:

  • For Sale : offer things to sell
  • Services : offer services to rent out





[image: Screen Shot 2016-05-03 at 10.55.22 AM.png]





Initial Problem and UX Requirements (How):


Once the initial UX requirements are done, it is time to create the architecture specification. Here is the initial web front-end (client) and back-end (server) design. The red part is the data model and flow. The blue part is the control.



[image: lampp_project.JPG]



Front End / Client  (HTML,CSS, JS)


In the screenshot below, you can see a LAMP stack running. The host OS is Windows 8. The guest OS is Ubuntu Linux, running on Windows 8 via VMware Workstation 12 Player (free). In Ubuntu, the LAMP software was installed - which includes the Apache web server, MySQL database server, and PHP. The client is a web browser, being served by the Apache web server. The project is located in my home directory (~albert/lamp_class_project).


The starting client web page is cll.html. There are two major functionalities implemented. One is the browse section, with categories such as Books, Lessons, Part-Time, Stockholm. The second functionality is New Post.





[image: finalproj1.png]


Server / Back End (PHP, MySQL)

In the HTML, when the user clicks on a category to browse (say Books), the category is linked to a PHP file. The PHP file executes: it opens a connection to MySQL, submits a MySQL query, then prints/echoes the results (which are rendered as HTML where the Books link was clicked).
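The project's actual back end is PHP; purely as an illustrative sketch of that flow (written in JavaScript here, with hypothetical table and column names, and the database replaced by an in-memory array):

```javascript
// Sketch of the browse flow: category click -> query -> rows -> echoed HTML.
// tblPost / strTitle / strCategory are hypothetical names; a real LAMP app
// would run a parameterized MySQL query from PHP instead of filtering an array.
const tblPost = [
  { strTitle: "Intro to SQL",   strCategory: "Books" },
  { strTitle: "LAMP in Action", strCategory: "Books" },
  { strTitle: "Guitar lessons", strCategory: "Lessons" }
];

function browse(category) {
  const rows = tblPost.filter(r => r.strCategory === category); // the "query"
  // The PHP script would echo this markup back into the page.
  return "<ul>" + rows.map(r => `<li>${r.strTitle}</li>`).join("") + "</ul>";
}

console.log(browse("Books")); // -> <ul><li>Intro to SQL</li><li>LAMP in Action</li></ul>
```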



[image: Screen Shot 2016-05-03 at 11.56.21 AM.png]


Data Model and Database :


A simple naming convention helps to differentiate between tuple (row), attribute (column), and relation (table). Below, I created a database called "dbHW3".


[image: Screen Shot 2016-05-03 at 12.05.07 PM.png]





Data Pre-Population of Categories, Locations  (MySQL)

Here is a SQL command to pre-populate the Craigslist Lite database.

[image: Screen Shot 2016-05-03 at 12.06.29 PM.png]


Results:

We showed how a LAMP stack can be used to create a simple "Craigslist Lite" web site to allow a community to browse or search for goods or services to buy or to sell.

[image: finalproj4.png]

Sunday, February 3, 2019

Drained - Can Only Blog So Much

2018 was a very quiet year for me in terms of personal blogging. The biggest reason was that I had to manage one of the micro blog sites at work, and doing that left me creatively drained. Being creatively drained is not a good place from which to write a blog, and so this blog site went quiet.

But if you are curious what I wrote during that gap on my work site, visit:

   https://blogs.oracle.com/author/albert-chiang