
8 Mar 2019

Serverless API with Azure Functions

In this post I am going to work on a pretty simple use case. While executing a deployment pipeline, FlexDeploy may produce human tasks that should be either approved or rejected. For example, someone has to approve a deployment to the production environment. This can be done either in the FlexDeploy UI or through external communication channels. Today I am going to focus on the scenario where a FlexDeploy human task is approved or rejected with Slack:


There are a few requirements and considerations that I would like to take into account:

  • I don't want to teach FlexDeploy to communicate with Slack
  • I don't want to provide Slack with the details of FlexDeploy API
  • I don't want to expose FlexDeploy API to the public
  • I do want to be able to easily change Slack to something different or add other communication tools without touching FlexDeploy
Basically, I want to decouple FlexDeploy from the details of the external communication mechanism. For that reason I am going to introduce an extra layer, an API between FlexDeploy and Slack. The serverless paradigm looks like a very attractive approach to implementing this API. Today I am going to build it with Azure Functions, because ... why not?

So, technically, a PoC version of the solution looks like this:

Once a new human task comes up, FlexDeploy notifies the serverless API about it, providing an internal task id and a task description. A function SaveTask saves the provided task details along with a generated token (just some uid) to Azure Table storage. The token has an expiration time, meaning it must be used before that time to approve or reject the task.

const azure = require('azure-storage');
const uuidv1 = require('uuid/v1');

module.exports = async function (context, taskid) {
    var tableSvc = azure.createTableService('my_account', 'my_key');
    var entGen = azure.TableUtilities.entityGenerator;
    var token = uuidv1();
    var tokenEntity = {
        PartitionKey: entGen.String('tokens'),
        RowKey: entGen.String(token),
        TaskId: entGen.String(taskid),
        // the token is valid for 24 hours
        dueDate: entGen.DateTime(new Date(Date.now() + 24 * 60 * 60 * 1000))
    };

    // wrap the callback-style insert into a Promise so the function
    // does not return before the entity is actually stored
    await new Promise(function (resolve, reject) {
        tableSvc.insertEntity('tokens', tokenEntity, function (error, result, response) {
            error ? reject(error) : resolve(result);
        });
    });

    return token;
};


Having the token saved, the PostToSlack function is invoked, posting a message to a Slack channel. The SaveTask and PostToSlack functions are orchestrated into a durable function, NotifyOnTask, which is the one actually invoked by FlexDeploy:
const df = require("durable-functions");

module.exports = df.orchestrator(function*(context){   
    var task = context.df.getInput()
    var token = yield context.df.callActivity("SaveTask",  task.taskid)
    return yield context.df.callActivity("PostToSlack",  {"token": token, "description": task.description})
});

The message in Slack contains two buttons to approve and reject the task.
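The PostToSlack function itself is mostly plumbing around the Slack API; the interesting part is the message payload carrying the token in the button links. A minimal sketch of building it (the public endpoint of the webhook and the query parameter names are my assumptions, and I use Slack's legacy attachment buttons here):

```javascript
// build a Slack message with Approve/Reject link buttons;
// apiUrl is a hypothetical public endpoint of the ActionOnToken webhook
function buildSlackMessage(description, token, apiUrl) {
    return {
        text: description,
        attachments: [{
            fallback: 'Approve or reject the task',
            actions: [
                { type: 'button', text: 'Approve',
                  url: apiUrl + '?token=' + token + '&action=approve' },
                { type: 'button', text: 'Reject',
                  url: apiUrl + '?token=' + token + '&action=reject' }
            ]
        }]
    };
}
```

Such a payload can then be POSTed to a Slack incoming webhook URL.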


The buttons refer to webhooks pointing to the ActionOnToken durable function:
const df = require("durable-functions");

module.exports = df.orchestrator(function*(context){   
    var input = context.df.getInput()
    var taskId = yield context.df.callActivity("GetTaskId",  input.token)
    if (input.action == 'approve') {
        yield context.df.callActivity("ApproveTask",  taskId)
    } else if (input.action == 'reject') {
        yield context.df.callActivity("RejectTask",  taskId)
    }
});


ActionOnToken invokes the GetTaskId function, retrieving the task id from the storage by the given token:
const azure = require('azure-storage');

module.exports = async function (context, token) {
    var tableSvc = azure.createTableService('my_account', 'my_key');

    // promisify the callback-style retrieveEntity call
    function queryTaskId(token) {
        return new Promise(function (resolve, reject) {
            tableSvc.retrieveEntity('tokens', 'tokens', token,
                function (error, result, response) {
                    if (error) {
                        reject(error);
                    } else {
                        resolve(result);
                    }
                });
        });
    }

    try {
        var tokenEntity = await queryTaskId(token);
        // accept the token only if it has not expired yet
        var dueDate = tokenEntity.dueDate._;
        if (dueDate > Date.now()) {
            return tokenEntity.TaskId._;
        }
    } catch (error) {
        // an unknown token makes retrieveEntity fail with ResourceNotFound
        context.log('Token not found: ' + token);
    }
};
Having done that, it either approves or rejects the task by invoking the ApproveTask or RejectTask function. These functions, in turn, make the corresponding calls to the FlexDeploy REST API.
const request = require('sync-request');
const fd_url = 'http://dkrlp01.flexagon:8000';

module.exports = async function (context, taskid) {
    // approve the task via FlexDeploy REST API
    var res = request('PUT',
        fd_url + '/flexdeploy/rest/v1/tasks/approval/approve/' + taskid, {});
};

I could have started developing my serverless application directly in the cloud on the Azure Portal, but I decided to implement everything and play with it locally first, moving to the cloud later. The fact that I can do that, develop and test my functions locally, is actually very cool; not every serverless platform gives you that. The only thing I have configured in the cloud is an Azure Table storage account with a table to store my tokens and task details.

A convenient way to start working with Azure Functions locally is to use Visual Studio Code as the development tool. I am working on a Mac, so I have downloaded and installed the version for Mac OS X. VS Code is all about extensions: for every technology you work with, you install one or a few extensions. The same goes for Azure Functions. There is an extension for that:



Having done that, you get a new tab where you can create a new function application and start implementing your functions:


While configuring a new project, the wizard asks you to select the language you prefer to implement the functions with:


Even though I love Java, I selected JavaScript because, on top of regular functions, I wanted to implement durable functions, and they only support C#, F# and JavaScript. At the moment of writing this post, JavaScript was the closest of those to me.

The rest is as usual. You create functions, write the code, debug, test, fix, and all over again. You just press F5 and VS Code starts the entire application in debug mode for you:


When you start the application for the first time, VS Code will offer to install the Functions runtime on your computer if it is not there. So basically, assuming that you have the runtime of your preferred language (Node.js) on your laptop, you just need VS Code with the Functions extension to start working with Azure Functions. It will do the rest of the installation for you.

So, once the application is started, I can test it by invoking the NotifyOnTask function, which initiates the entire cycle:
curl -X POST --data '{"taskid":"8900","description":"DiPocket v.1.0.0.1 is about to be deployed to PROD"}'  -H "Content-type: application/json" http://localhost:7071/api/orchestrators/NotifyOnTask

The source code of the application is available on GitHub.

Well, my general impression of Azure Functions so far is ... it is good. It just works. I didn't run into any annoying issues (so far) while implementing this solution (apart from some silly mistakes I made because I didn't read the manual carefully). I will definitely keep playing with and posting on Azure Functions, enriching this solution, moving it to the cloud and, probably, implementing something different.

That's it!



31 Dec 2018

Conversational UI with Oracle Digital Assistant and Fn Project. Part II

In my previous post I implemented a conversational UI for FlexDeploy with Oracle Digital Assistant. Today I am going to enrich it with Fn Flow so that the chatbot accepts a release name instead of an id to create a snapshot. With that in place, the conversation sounds more natural:

...
"Can you build a snapshot?" I asked.
"Sure, what release are you thinking of?"
"Olympics release"
"Created a snapshot for release Olympics" she reported.
...


The chatbot invokes Fn Flow, passing the release name to it as an input. The flow invokes an Fn function to get the id of the given release and then invokes another Fn function calling the FlexDeploy REST API with that id.


So the createSnapshotFlow orchestrates two Fn functions in a chain. The one getting the release id for the given name with the FlexDeploy REST API:
const fdk = require('@fnproject/fdk');
const request = require('sync-request');
const fd_url = 'http://dkrlp01.flexagon:8000'; // FlexDeploy base url

fdk.handle(function (input) {
  // look up the release by name and return its id
  var res = request('GET', fd_url + '/flexdeploy/rest/v1/release?releaseName=' + input, {});
  return JSON.parse(res.getBody('utf8'))[0].releaseId;
})

And the one creating a snapshot for the release id with the same API:

const fdk = require('@fnproject/fdk');
const request = require('sync-request');
const fd_url = 'http://dkrlp01.flexagon:8000'; // FlexDeploy base url

fdk.handle(function (input) {
  // create a snapshot for the given release id
  var res = request('POST', fd_url + '/flexdeploy/rest/v1/releases/' + input + '/snapshot', {
    json: { action: 'createSnapshot' },
  });
  return JSON.parse(res.getBody('utf8'));
})

The core piece of this approach is Fn Flow. The Java code of createSnapshotFlow looks like this:

public class CreateSnapshotFlow {

  public byte[] createSnapshot(String input) {
    Flow flow = Flows.currentFlow();

    FlowFuture<byte[]> stage = flow
      // invoke checkreleasefn to resolve the release name into an id
      .invokeFunction("01D14PNT7ZNG8G00GZJ000000D", HttpMethod.POST,
                      Headers.emptyHeaders(), input.getBytes())
      .thenApply(HttpResponse::getBodyAsBytes)
      .thenCompose(releaseId -> flow
          // invoke createsnapshotfn with the resolved id
          .invokeFunction("01CXRE2PBANG8G00GZJ0000001", HttpMethod.POST,
                          Headers.emptyHeaders(), releaseId))
      .thenApply(HttpResponse::getBodyAsBytes);

    return stage.get();
  }
}


Note that the flow operates with function ids rather than function names. The list of all application functions with their ids can be retrieved with this command line:


Where odaapp is my Fn application.

That's it!

30 Nov 2018

Conversational UI with Oracle Digital Assistant and Fn Project

Here and there we see numerous predictions that pretty soon chatbots will play a key role in the communication between the users and their systems. I don't have a crystal ball and I don't want to wait for this "pretty soon", so I decided to make these prophecies come true now and see what it looks like.

The flagship product of the company I work for is FlexDeploy, a fully automated DevOps solution. One of the most popular activities in FlexDeploy is creating a release snapshot, which actually builds all deployable artifacts and deploys them across environments with a pipeline.
So, I decided to have some fun over the weekend and implemented a conversational UI for this operation, where I am able to talk to FlexDeploy. Literally. At the end of my work my family saw me talking to my laptop, and they could hear something like this:

  "Calypso!" I said.
  "Hi, how can I help you?" was the answer.
  "Not sure" I tested her.
  "You gotta be kidding me!" she got it.
  "Can you build a snapshot?" I asked.
  "Sure, what release are you thinking of?"
  "1001"
  "Created a snapshot for release 1001" she reported.
  "Thank you" 
  "Have a nice day" she said with relief.

So,  basically, I was going to implement the following diagram:


As the core component of my UI I used a brand new Oracle product, Oracle Digital Assistant. I built a new skill capable of basic chatting and implemented a new custom component, so my bot was able to invoke an http request to have the back-end system create a snapshot. The export of the skill FlexDeployBot, along with the Node.js source code of the custom component custombotcomponent, is available in the GitHub repo for this post.
I used my MacBook as a communication device capable of listening and speaking, and I defined a webhook channel for my bot so I can send messages to it and get callbacks with responses.

It looks simple and nice on the diagram above. The only thing is that I wanted to decouple the brain, my chatbot, from the details of the communication device and from the details of the installation/version of my back-end system, FlexDeploy. I needed an intermediate API layer, a buffer, something to put between ODA and the outer world. It looks like serverless functions are a perfect fit for this job.

As a serverless platform I used Fn Project. The beauty of it is that it's a container-native serverless platform, totally based on Docker containers, and it can be easily run locally on my laptop (which is what I did for this post) or somewhere in the cloud, say, on Oracle Kubernetes Engine.

Ok, let's get into the implementation details from left to right of the diagram.

So, the listener component, the ears, the one which recognizes my speech and converts it into text, is implemented with Python:

The key code snippet of the component looks like this (the full source code is available on GitHub):
import time
import requests
import speech_recognition as sr

r = sr.Recognizer()
mic = sr.Microphone()

# raise the energy threshold so that ambient noise is ignored
with mic as source:
    r.energy_threshold = 2000

while True:
    try:
        with mic as source:
            audio = r.listen(source, phrase_time_limit=5)
            transcript = r.recognize_google(audio)
            print(transcript)
            # 'active' and 'URL' are defined in the full source: the flag
            # mutes the listener, URL points to the sendToBotFn function
            if active:
                requests.post(url=URL, data=transcript)
                time.sleep(5)
    except sr.UnknownValueError:
        print("Sorry, I don't understand you")
Why Python? There are plenty of speech recognition libraries available for Python, so you can play with them and choose the one that understands your accent best. I like Python.
So, once the listener recognizes my speech, it invokes an Fn function, passing the phrase as the request body.
The function sendToBotFn is implemented with Node.js:
const fdk = require('@fnproject/fdk');
const crypto = require('crypto');
const request = require('sync-request');
// host, endpoint, userId and channelKey are the webhook channel settings
// (defined in the full source on GitHub)

function buildSignatureHeader(buf, channelSecretKey) {
    return 'sha256=' + buildSignature(buf, channelSecretKey);
}

function buildSignature(buf, channelSecretKey) {
    const hmac = crypto.createHmac('sha256', Buffer.from(channelSecretKey, 'utf8'));
    hmac.update(buf);
    return hmac.digest('hex');
}

function performRequest(headers, data) {
    var dataString = JSON.stringify(data);

    var options = {
        body: dataString,
        headers: headers
    };

    request('POST', host + endpoint, options);
}

function sendMessage(message) {
    let messagePayload = {
        type: 'text',
        text: message
    };

    let messageToBot = {
        userId: userId,
        messagePayload: messagePayload
    };

    // sign the payload so the bot can authenticate the sender
    let body = Buffer.from(JSON.stringify(messageToBot), 'utf8');
    let headers = {};
    headers['Content-Type'] = 'application/json; charset=utf-8';
    headers['X-Hub-Signature'] = buildSignatureHeader(body, channelKey);

    performRequest(headers, messageToBot);
}

fdk.handle(function (input) {
    sendMessage(input);
    return input;
})

Why Node.js? It's not because I like it. No. It's because the Oracle documentation on implementing a custom webhook channel refers to Node.js. They like it.

When the chatbot responds, it invokes a webhook referring to an Fn function receiveFromBotFn running on my laptop. I use an ngrok tunnel to expose my Fn application, listening on localhost:8080, to the Internet. The receiveFromBotFn function is also implemented with Node.js:
const fdk=require('@fnproject/fdk');
const request = require('sync-request');
const url = 'http://localhost:4390';
fdk.handle(function(input){  
    var sayItCall = request('POST', url,{
     body: input.messagePayload.text,
    });
  return input;
})
 
The function sends an http request to a simple web server running locally and listening on port 4390.
I have to admit that it's really easy to implement stuff like that with Node.js. The web server uses the Mac OS X native utility say to pronounce whatever comes in the request body:
var http = require('http');
const exec = require('child_process').exec;

http.createServer(function (req, res) {
    let body = '';
    req.on('data', chunk => {
        body += chunk.toString();
    });

    req.on('end', () => {
        // pronounce the incoming text with the 'say' utility
        exec('say ' + body, (error, stdout, stderr) => {
        });
        res.end('ok');
    });
}).listen(4390);
In order to actually invoke the back-end to create a snapshot with FlexDeploy, the chatbot uses the custombotcomponent to invoke an Fn function createSnapshotFn:
const fdk = require('@fnproject/fdk');
const request = require('sync-request');
const fd_url = 'http://dkrlp01.flexagon:8000'; // FlexDeploy base url

fdk.handle(function (input) {
    // start building a snapshot for the given release
    var res = request('POST', fd_url + '/flexdeploy/rest/v1/releases/' + input + '/snapshot', {
        json: { action: 'createSnapshot' },
    });
    return JSON.parse(res.getBody('utf8'));
})

The function is simple; it just invokes the FlexDeploy REST API to start building a snapshot for the given release. It is also implemented with Node.js; however, I am going to rewrite it in Java. I love Java. Furthermore, instead of a simple function, I am going to implement an Fn Flow that first checks whether the given release exists and is valid, and only after that invokes the createSnapshotFn function for that release. In the next post.


That's it!



31 Jan 2018

Fn Function to build an Oracle ADF application

In one of my previous posts I described how to create a Docker container serving as a builder machine for ADF applications. Here I am going to show how to use this container as a function on the Fn platform.

First of all, let's update the container so that it meets the requirements of a function, meaning that it can be invoked as a runnable binary accepting some arguments. In an empty folder I have created a Dockerfile (just a plain text file with this name) with the following content:

FROM efedorenko/adfbuilder
ENTRYPOINT ["xargs","mvn","package","-DoracleHome=/opt/Oracle_Home","-f"]

This file contains instructions for Docker on how to create a new Docker image out of an existing one (efedorenko/adfbuilder from the previous post) and specifies an entry point, so that a container knows what to do once it has been started by the docker run command. In this case, whenever we run a container, it executes the Maven package goal for the pom file whose name is fetched from stdin. This is important, as the Fn platform uses stdin/stdout for function input/output as a standard approach.

In the same folder let's execute a command to build a new Docker image (fn_adfbuilder) out of our Docker file:

docker build -t efedorenko/fn_adfbuilder .

Now, if we run the container, passing the pom file name through stdin like this:

echo -n "/opt/MySampleApp/pom.xml" | docker run -i --rm efedorenko/fn_adfbuilder

The container will execute inside itself what we actually need:

mvn package -DoracleHome=/opt/Oracle_Home -f /opt/MySampleApp/pom.xml

Basically, having done that, we have a container acting as a function. It builds an application for the given pom file.

Let's use this function on the Fn platform. Installing Fn on your local machine is as easy as invoking a single command, described on the GitHub Fn project page. Once Fn is installed, we can specify the Docker registry where we store the images of our function containers and start the Fn server:

export FN_REGISTRY=efedorenko 
fn start

The next step is to create an Fn application which is going to use our awesome function:

fn apps create adfbuilderapp

For this newly created app we have to specify a route to our function container, so that the application knows when and how to invoke it:

fn routes create --memory 1024 --timeout 3600 --type async adfbuilderapp /build efedorenko/fn_adfbuilder:latest

We have created a route saying that whenever the /build resource is requested for adfbuilderapp, the Fn platform should create a new Docker container based on the latest version of the fn_adfbuilder image from the efedorenko repository and run it, granting it 1 GB of memory and passing arguments to stdin (the default mode). Furthermore, since building is a time- and resource-consuming job, we're going to invoke the function in async mode with an hour timeout. Having the route created, we are able to invoke the function with the Fn CLI:

echo -n "/opt/MySampleApp/pom.xml" | fn call adfbuilderapp /build

or over http:

curl -d "/opt/MySampleApp/pom.xml" http://localhost:8080/r/adfbuilderapp/build

In both cases the platform will put the call in a queue (since it is async) and return the call id:

{"call_id":"01C5EJSJC847WK400000000000"}


The function is working now, and we can check how it is going in a number of ways. Since a function invocation is just creating and running a Docker container, we can see it by listing all running containers:


docker ps 

CONTAINER ID        IMAGE                               CREATED             STATUS                NAMES

6e69a067b714        efedorenko/fn_adfbuilder:latest     3 seconds ago       Up 2 seconds          01C5EJSJC847WK400000000000
e957cc54b638        fnproject/ui                        21 hours ago        Up 21 hours           clever_turing
68940f3f0136        fnproject/fnserver                  27 hours ago        Up 27 hours           fnserver



Fn has created a new container and used the function call id as its name. We can attach our stdin/stdout to the container and see what is happening inside:

docker attach 01C5EJSJC847WK400000000000

Once the function has executed, we can use the Fn REST API (or the Fn CLI) to request information about the call:

http://localhost:8080/v1/apps/adfbuilderapp/calls/01C5EJSJC847WK400000000000

{"message":"Successfully loaded call","call":{"id":"01C5EJSJC847WK400000000000","status":"success","app_name":"adfbuilderapp","path":"/build","completed_at":"2018-02-03T19:52:33.204Z","created_at":"2018-02-03T19:46:56.071Z","started_at":"2018-02-03T19:46:57.050Z","stats":[{"timestamp":"2018-02-03T19:46:58.189Z","metrics":
....





http://localhost:8080/v1/apps/adfbuilderapp/calls/01C5EJSJC847WK400000000000/log


{"message":"Successfully loaded log","log":{"call_id":"01C5EKA5Y747WK600000000000","log":"[INFO] Scanning for projects...\n[INFO] ------------------------------------------------------------------------\n[INFO] Reactor Build Order:\n[INFO] \n[INFO] Model\n[INFO] ViewController\n[INFO]
....



We can also monitor function calls in a fancy way by using Fn UI dashboard:



The result of our work is a function that builds ADF applications. The beauty of it is that the consumer of the function, the caller, just uses a REST API over http to get the application built, and the caller does not care how and where this job is done. But the caller knows for sure that computing resources will be utilized no longer than needed to get the job done.

Next time we'll try to orchestrate the function in Fn Flow.

That's it!