Serverless event feedback processing and analytics using #aws

I was recently involved in organising a techie conference. We wanted a feedback mechanism, so I built a serverless event feedback system. Here’s how it works…

Have a bunch of iPads/tablets with a nice feedback form:

[Image: the feedback form]

Collect up the feedback and send it to a lambda function via API Gateway:


const APIurl = "https://api.mydomain.net/eventFeedback?";

// upload the form values to AWS and start looking for results
function sendFeedback() {

    var apiDelay = 500; // milliseconds - how long to wait before each check

    var learnedValue = $("input:radio[name='learnedRadios']:checked").val();
    var awesomeValue = $("input:radio[name='awesomeRadios']:checked").val();
    var commentsText = $('#mainInput').val();
    commentsText = encodeURIComponent(commentsText);

    var apiCall = APIurl + 'learned=' + learnedValue + '&awesome=' + awesomeValue + '&comments=' + commentsText;
    showSpinnyThing();

    $.get(apiCall, function(data) {

        // got a response - show the thank-you panel, then reset the form
        console.log('server returned');
        hideSpinnyThing();
        $('#form_wrapper').hide();
        window.scrollTo(0, 0);
        $('#feedbackSubmitted').show();
        setTimeout(function() {
            resetForm();
        }, 3000);
    });

}

The lambda function runs sentiment analysis plus entity and keyword extraction on the text comments before sending it all off to Elasticsearch:

const AWS = require('aws-sdk');
const comprehend = new AWS.Comprehend();

function detectSentiment(callback, responseObj) {

    var params = {
        LanguageCode: 'en',
        Text: responseObj.feedback.comments
    };
    comprehend.detectSentiment(params, function(err, data) {
        if (err) {
            console.log(err, err.stack);
            callback(err);
        }
        else {
            // attach the sentiment result and hand the enriched object back
            responseObj.Sentiment = data;
            callback(null, responseObj);
        }
    });

}
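Entity and keyword extraction follow exactly the same pattern, via Comprehend’s detectEntities and detectKeyPhrases calls. A minimal sketch (the Entities/KeyPhrases field names I hang off responseObj are my own choice, not necessarily what the full code uses):

function detectEntities(callback, responseObj) {
    var params = {
        LanguageCode: 'en',
        Text: responseObj.feedback.comments
    };
    comprehend.detectEntities(params, function(err, data) {
        if (err) { return callback(err); }
        responseObj.Entities = data.Entities;     // people, places, brands...
        callback(null, responseObj);
    });
}

function detectKeyPhrases(callback, responseObj) {
    var params = {
        LanguageCode: 'en',
        Text: responseObj.feedback.comments
    };
    comprehend.detectKeyPhrases(params, function(err, data) {
        if (err) { return callback(err); }
        responseObj.KeyPhrases = data.KeyPhrases; // noun phrases worth charting
        callback(null, responseObj);
    });
}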

(full code on GitHub)
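The “sending it all off to Elasticsearch” step is just an HTTPS POST of the enriched document to the domain. A hypothetical sketch (the domain and index name are made up, and a real Amazon ES domain would need request signing or a suitable access policy):

var https = require('https');

function indexFeedback(responseObj, callback) {
    var body = JSON.stringify(responseObj);
    var req = https.request({
        host: 'my-es-domain.eu-west-1.es.amazonaws.com', // hypothetical domain
        path: '/feedback/_doc',                          // hypothetical index
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Content-Length': Buffer.byteLength(body)
        }
    }, function(res) {
        res.on('data', function() {}); // drain the response body
        res.on('end', function() { callback(null, res.statusCode); });
    });
    req.on('error', callback);
    req.write(body);
    req.end();
}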

Then I configured a Kibana dashboard to display the data, set it to auto-refresh, and you’ve got serverless real-time event analytics.

[Image: the event feedback dashboard in Kibana]

Oh yeah, and the whole thing took less than 6 hours to build (with the help of the frankly brilliant AWS Amplify hosting service).


Building a custom lambda runtime for anything? Even Pascal? Yes! #lambda #reinvent #aws

At AWS re:Invent 2018 today, Werner Vogels said it was now possible to use any language in AWS Lambda. I thought I’d put that to the test!

I thought it’d be interesting to add lambda support for Pascal, specifically the FreePascal variant that ships with Lazarus (the free, cross-platform take on Delphi). Mainly because it doesn’t really fit, being a compiled language, but I do have a lingering soft spot for the Lazarus project, so I thought I’d give it a go.

Fair warning: this doesn’t make a lot of sense. As a compiled language you can’t initialise things in the runtime and then call them from multiple instances of functions, and it doesn’t deliver an amazing cold-start experience since the code needs compiling on each run.

But… it does work! I based this on an AWS tutorial for creating a custom bash runtime.

You can write a Pascal lambda function like this:

begin
  writeln('{"status":"200", "message":"hello from fpc lambda"}');
end.

To do this, save the above as function.pas and then create an execution role:

To create an execution role

  1. Open the roles page in the IAM console.
  2. Choose Create role.
  3. Create a role with the following properties.
    • Trusted entity: Lambda.
    • Permissions: AWSLambdaBasicExecutionRole.
    • Role name: lambda-role.

    The AWSLambdaBasicExecutionRole policy has the permissions that the function needs to write logs to CloudWatch Logs.
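If you prefer the CLI, something like this creates the same role (the inline trust policy is the standard Lambda one):

aws iam create-role --role-name lambda-role \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name lambda-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole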

Then we can simply create a lambda function using the command line like so:

zip function.zip ./function.pas
aws lambda create-function --function-name fpc-hello \
  --zip-file fileb://function.zip --handler function.handler \
  --runtime provided \
  --layers arn:aws:lambda:eu-west-1:743697633610:layer:fpc-runtime:20 \
  --role arn:aws:iam::[your-account-id]:role/lambda-role

This works because I’ve made the custom runtime layer public so anyone can use it.
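For reference, publishing a layer and opening it up to everyone looks roughly like this (the zip name is a placeholder; version 20 matches the layer ARN above):

aws lambda publish-layer-version --layer-name fpc-runtime \
  --zip-file fileb://fpc-runtime.zip
aws lambda add-layer-version-permission --layer-name fpc-runtime \
  --version-number 20 --statement-id public \
  --action lambda:GetLayerVersion --principal '*'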

But that doesn’t really do much; what about parsing and returning JSON? Easy…

uses
  fpjson, jsonparser, sysutils;

var
  lambdaEventData: TJSONData;
  lambdaEvent: TJSONObject;
  nameParameter, outputJSON: string;
begin
  // get the incoming event JSON (passed as the first argument) and parse it
  lambdaEventData := GetJSON(ParamStr(1));

  // cast as TJSONObject to make access easier
  lambdaEvent := TJSONObject(lambdaEventData);

  // second argument is the default if 'name' is missing
  nameParameter := lambdaEvent.Get('name', '');
  outputJSON := format('{"status":"200", "message":"hello %s from fpc lambda"}', [nameParameter]);
  WriteLn(outputJSON);

  lambdaEventData.Free;
end.

You can then invoke the function simply from the command line or AWS console:

aws lambda invoke --function-name fpc-lambda-event --payload '{"name":"Mike"}' response.txt
cat response.txt

[Screenshot: the invoke command’s response]

How it works

The custom runtime is a simple Linux executable that receives events from AWS and publishes results. That means things compiled on Amazon Linux 2 are generally going to work.

When Lambda runs, it loads your custom layers into the /opt directory, so all that’s needed is to modify fpc.cfg and the search paths to take account of that. This isn’t an especially lean build of FPC; it includes most libraries and a good chunk of the FCL.
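For example, the relevant fpc.cfg entries end up pointing under /opt; a rough sketch (the exact paths are assumptions – see the repo for the real file):

# fpc.cfg fragment: search paths moved under /opt, where Lambda
# mounts the custom runtime layer (paths here are assumptions)
-Fu/opt/fpc3/lib/fpc/$fpcversion/units/$fpctarget
-Fu/opt/fpc3/lib/fpc/$fpcversion/units/$fpctarget/*
-Fl/opt/fpc3/lib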

Here’s my bootstrap code:


#!/bin/sh
set -euo pipefail

# Make the layer's FPC binaries and config visible
PATH="/opt/fpc3/bin:${PATH}"
export PATH

PPC_CONFIG_PATH="/opt/"
export PPC_CONFIG_PATH

# Processing loop: fetch the next event, run the function, post the result
while true
do
    HEADERS="$(mktemp)"

    # Get an event from the runtime API
    EVENT_DATA=$(curl -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
    REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)

    # Compile (cached in /tmp) and run the Pascal function, passing the event as an argument
    RESPONSE=$(instantfpc --set-cache=/tmp/ ./function.pas "$EVENT_DATA")

    echo "fpc response: $RESPONSE"

    # Send the response back to the runtime API
    curl -s -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" -d "$RESPONSE"
done

You can see the entirety of the code, including the runtime, the FPC files and the two functions described above, on GitHub.


How to deploy static websites to S3 using AWS CodeCommit, CodeBuild and CodePipeline #aws

Git repository -> S3-based website

  1. Create a git repository for your html/css/js files
  2. Add a buildspec.yml to tell AWS CodeBuild what to do
  3. Create a new CodeBuild project to do your builds
  4. Create a CodePipeline based on your CodeCommit repo that triggers whenever you push to the repo (skip the Deploy stage)

Push changes, then sit back and watch the magic. (Note: if you’re using CloudFront you’ll need to wait to see the changes, or trigger an invalidation – which you could also do as a post-build step!)

[Image: the CodeCommit -> CodeBuild -> S3 flow]

buildspec.yml (at root of repository)

version: 0.2

#env:
  #variables:
     # key: "value"
     # key: "value"
  #parameter-store:
     # key: "value"
     # key: "value"

phases:
  #install:
    #commands:
      # - command
      # - command
  #pre_build:
    #commands:
      # - command
      # - command
  #build:
    #commands:
      # - command
      # - command
  post_build:
    commands:
       - aws s3 sync . s3://[dest-bucket] --exclude .gitignore --exclude buildspec.yml --exclude '.git/*'
      # - command
#artifacts:
  #files:
    # - location
    # - location
  #name: $(date +%Y-%m-%d)
  #discard-paths: yes
  #base-directory: location
#cache:
  #paths:
    # - paths

Also, don’t forget to let your CodeBuild service role have the necessary permissions on S3 to avoid the build failing (it’ll need List, Get and Put permissions on the destination bucket).
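A minimal statement for that service role might look like this ([dest-bucket] is a placeholder, as above):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::[dest-bucket]",
        "arn:aws:s3:::[dest-bucket]/*"
      ]
    }
  ]
}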

If you wanted to kick off a CloudFront invalidation as part of the build process, you could add something like:

aws cloudfront create-invalidation --distribution-id [your_CF_ID] --paths /*.html

This is how the soft-practice.com serverless website works 🙂


How to use Lambda@Edge to redirect a CloudFront url #aws

You’ve got a published URL, http://mydomain/an_old_link, you’ve moved your site to AWS CloudFront, and that published URL is causing you problems. The good news is that you can use Lambda@Edge to trap the incoming request and redirect it to a file of your choice.

This means you can redirect http://mydomain/an_old_link to https://mydomain/a_new_link.html easily, and most importantly, transparently to the user.

This is useful because sometimes you’ve got old links published in print, or otherwise outside of your control. CloudFront is great for HTTP-to-HTTPS redirection, but it doesn’t serve default files in subfolders 😦 Lambda@Edge can fix that problem.
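(As a taster, that default-document fix is a one-line URI rewrite, using exactly the handler shape shown later in this post; a minimal sketch:)

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    // a request for /docs/ becomes a request for /docs/index.html
    if (request.uri.endsWith('/')) {
        request.uri += 'index.html';
    }
    callback(null, request);
};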

First, create a lambda function (in us-east-1); it will be automatically replicated to edge locations through CloudFront.

Make sure its execution role can be assumed by “lambda.amazonaws.com” and “edgelambda.amazonaws.com”. Edit the “Trust Relationship” policy document of the function’s IAM execution role (not its permissions) to:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com",
          "edgelambda.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

The lambda function can then be triggered from CloudFront on a behaviour for “an_old_link”, using an “Origin Request” trigger. That means the function only runs when the object isn’t already in the CloudFront cache and CloudFront reaches back to the origin for the content – not on every request, just on a cache miss.

Here’s the lambda code.

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    if (request.uri === '/an_old_link') {
        request.uri = '/a_new_link.html';
    }
    callback(null, request);
};
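Strictly speaking this is a rewrite rather than a redirect: the browser’s address bar keeps the old URL. If you’d rather send the client a real redirect, the function can return a response object instead of the modified request; a minimal sketch:

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    if (request.uri === '/an_old_link') {
        // short-circuit with a 301 instead of forwarding to the origin
        return callback(null, {
            status: '301',
            statusDescription: 'Moved Permanently',
            headers: {
                location: [{ key: 'Location', value: '/a_new_link.html' }]
            }
        });
    }
    callback(null, request);
};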



Forward commands from PHP to LightwaveRF

I wrote this little forwarder recently so that other bits of software in our house/network could send commands to our LightwaveRF kit without needing to be registered with the hub, effectively giving all our software integrations local-speed access.

I’ve got a little web server running on a Pi; that machine is registered with the LightwaveRF hub (it runs my Logitech Harmony integrations). That server can then be used to forward commands like this:


<html>
<head>
<title>MMD PI Lightwave Control</title>
</head>

<body>
<h2>RPI Lightwave Command forwarder</h2>
<?php

// command arrives on the query string, e.g. lwrf.php?command=R3D3F1
$lw_command = $_GET['command'];
$server_ip = '192.168.0.10';    // the LightwaveRF Link hub
$server_port = '9760';          // the hub's UDP command port
$message = '666,!'.$lw_command; // leading number is a transaction id

// LightwaveRF commands are plain UDP datagrams - fire and forget
$socket = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);
socket_sendto($socket, $message, strlen($message), 0, $server_ip, $server_port);
socket_close($socket);

?>

<strong>Sent command: </strong> <?= $message ?>

</body>

</html>

Now other bits of software, web pages or native clients can simply call URLs such as

http://myserver/lwrf.php?command=R3D3F1

(room 3, device 3, turn on)


How to call a web service API from Amazon Alexa using JavaScript in a Lambda function

This wasn’t trivial and I’ve seen a lot of questions about it online, so I thought I’d share how I did it to get Alexa to respond to questions about my kids’ pocket money 🙂

You can call a URL like this (note I’m assuming a JSON response; you’d need to modify it for non-JSON):

var http = require('http');

function getWebRequest(url, doWebRequestCallBack) {
    http.get(url, function (res) {
        var webResponseString = '';

        if (res.statusCode != 200) {
            res.resume(); // drain the response so the socket is released
            doWebRequestCallBack(new Error("Non 200 Response"));
            return;       // don't fall through and call the callback twice
        }

        res.on('data', function (data) {
            webResponseString += data;
        });

        res.on('end', function () {
            var webResponseObject = JSON.parse(webResponseString);
            if (webResponseObject.error) {
                doWebRequestCallBack(new Error(webResponseObject.error.message));
            } else {
                doWebRequestCallBack(null, webResponseObject);
            }
        });
    }).on('error', function (e) {
        doWebRequestCallBack(new Error(e.message));
    });
}

You can then use it like this from inside an intent handler:

function getPocketMoney(intent, session, callback) {

    let shouldEndSession = false;
    let repromptText = null;
    let speechOutput = '';
    let cardTitle = '';

    let person = 'Mike'; // matches the person requested in the URL below
    let url = "http://mypocketmoneyservice/getpocketmoney?person=mike";

    getWebRequest(url, function webResponseCallback(err, data) {
        if (err) {
            speechOutput = "Sorry, I couldn't connect to the server: " + err.message;
        } else {
            // shape depends on your service's JSON - something like this
            const balance = data.pocketmoney.person.balance;
            speechOutput = `${person} has £${balance} pocket money.`;
        }
        // respond in both the error and success cases
        callback({}, buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
    });
}
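buildSpeechletResponse isn’t shown here; it’s the stock helper from the Alexa samples, which looks roughly like this:

function buildSpeechletResponse(title, output, repromptText, shouldEndSession) {
    return {
        outputSpeech: {
            type: 'PlainText',
            text: output
        },
        card: {
            type: 'Simple',
            title: title,
            content: output
        },
        reprompt: {
            outputSpeech: {
                type: 'PlainText',
                text: repromptText
            }
        },
        shouldEndSession: shouldEndSession
    };
}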

Howto: Controlling LightwaveRF lights with a Raspberry Pi, Flirc and Logitech Harmony One IR remote

I’ve recently got into home automation, so I thought it would be fun to integrate my smart lighting with my media centre and remote control. When I watch TV or streaming video I want the main lights to go off and the side/back lights to come on in my living room. When I pause I want the main lights to come up a little, and when I play I want them to turn off again 🙂

My setup

I use LightwaveRF devices to control my main lights (2 dimmers) and various side-lights and screen backlights. I’ve also got a LightwaveRF Link hub, which allows the lights to be controlled by app, but also by simple UDP packets.

I’ve got a Logitech Harmony One remote control because it allows you to set up custom sequences, and it controls my TV, DVR box and sound bar.

My solution:

I decided to use a Raspberry Pi 3 (now with WiFi and Bluetooth built in!) to send the LightwaveRF UDP packets, and a Flirc to interpret the IR signals and convert them into simple text commands. Then I wrote a little Python program that listens for the incoming commands from the Flirc USB infrared receiver. There are other ways of interpreting IR commands, but this was a super simple one!

[Image: Harmony One to LightwaveRF integration]