Plotting Sensor Data Sets

Hi,

I designed a LoRaWAN sensor that measures acceleration (X, Y, Z) over a given time span (5 s to 10 s) at a 5 ms sampling interval. I need to send this data to the cloud to be analyzed and plotted.

Let’s only talk about the X axis values. What I do is split the data into chunks of 124 bytes and send these chunks via LoRa (DR2) through TTN to Tago.

The first thing I need to do is wait for all transmissions to arrive (I send a termination transmission for that purpose) and recreate a single array with all the data on Tago. I am not sure how I can do that. Maybe by appending data to a bucket variable?

The second thing would be to plot it on a dashboard chart widget. How can I do that? There is no timestamp associated with the data points, so I would create a dummy array (1, 2, 3, 4, 5, …, N) for the horizontal axis of the chart.

Thanks!
Xavier

Hi @xavier,
If you need to perform operations based on historical data that was inserted to your device, you will need to use an analysis instead of the payload parser.

The main reason for this is that the payload parser doesn’t have access to any request routes, which you would need in order to retrieve the previous data from your device and concatenate it.

You can use an action to run an analysis every minute, and in the analysis you get the most recent data and apply your logic. Or you can trigger an analysis each time data arrives at your device and take actions based on the payloads you’re receiving.

TagoIO has a few snippets available, such as “Min, Max and Average”, which shows how to retrieve data from the device.
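For illustration, a minimal sketch of an analysis that retrieves recent history from a device (the variable name and quantity here are just placeholders):

const { Analysis, Device, Utils } = require("@tago-io/sdk");

async function myAnalysis(context) {
  // device token stored as an environment variable of the analysis
  const env_vars = Utils.envToJson(context.environment);
  const device = new Device({ token: env_vars.device_token });

  // retrieve up to 100 previous registers of a variable from the bucket
  const history = await device.getData({ variable: "my_variable", qty: 100 });
  context.log(`Retrieved ${history.length} registers`);
}

module.exports = new Analysis(myAnalysis);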


You can personalize the X axis of any chart. Just go to the Data Range and Format tab of the widget, click on the X axis option, and select Group. You’ll need to set up a variable to use as the X axis.

Here is another post in the community talking about how to do this.

Hi @vitor,

I managed to get just about what I wanted… or close!

I parsed the data correctly and wrote an analysis script after understanding how getData and its filters work. BTW, I was reading the SDK pages and was wondering if there is more information/examples for all the Tago SDK content? For example, it is not obvious what the hard-coded options for the filters are, or how to use them, other than by looking at the code snippets.

Then, I created an action that triggered an analysis when a certain packet type was received (type ‘b’ or 0x62). Here’s the code:

const { Analysis, Device, Utils } = require("@tago-io/sdk");

// The function myAnalysis will run when you execute your analysis
async function myAnalysis(context) {
  // read the values from the environment and save them in env_vars
  const env_vars = Utils.envToJson(context.environment);
  if (!env_vars.device_token) {
    return context.log("Missing device_token environment variable");
  }

  const device = new Device({ token: env_vars.device_token });

  // create the filter options to get the data from TagoIO
  const filter_axis = { variable: "accel_axis", query: "last_item" };
  const filter_chunk = { variable: "chunk_no", query: "last_item" };
  const filter_data = { variable: "accel_data", query: "last_item" };

  const resultArray1 = await device.getData(filter_axis).catch(() => null);
  const resultArray2 = await device.getData(filter_chunk).catch(() => null);
  const resultArray3 = await device.getData(filter_data).catch(() => null);

  // check that none of the arrays is empty
  if (!resultArray1 || !resultArray1[0] || !resultArray2 || !resultArray2[0] || !resultArray3 || !resultArray3[0]) {
    return context.log("Empty Array");
  }

  // query: "last_item" always returns only one value
  const axis = resultArray1[0].value;
  const chunk = resultArray2[0].value;
  const data = resultArray3[0].value;

  // print to the console at TagoIO
  context.log(axis);
  context.log(chunk);
  context.log(data);

  // bail out before inserting anything if the axis is unknown
  if (axis !== "x" && axis !== "y" && axis !== "z") {
    return context.log("Could not resolve axis...");
  }

  // number of acceleration samples: each int16 sample takes 4 hex characters
  const data_len = data.length / 4;

  // create a buffer from the acceleration data chunk (hex string)
  const data_buffer = Buffer.from(data, "hex");

  for (let i = 0; i < data_len; i++) {
    const sample = data_buffer.readInt16LE(2 * i);
    const index = i + chunk * data_len;

    // data point plus a dummy range value used as the X axis of the chart
    const obj = [
      { variable: `accel_${axis}_data`, serie: index, value: sample, unit: "mg" },
      { variable: `accel_${axis}_range`, serie: index, value: index },
    ];

    try {
      await device.sendData(obj);
    } catch (error) {
      context.log("Error when inserting:", error);
    }
  }
}

module.exports = new Analysis(myAnalysis);

// To run the analysis on your machine (external):
// module.exports = new Analysis(myAnalysis, { token: "YOUR-TOKEN" });

BTW, I tried to get more than one variable at once, but I was not able to do so. I am pretty sure you can tell me how…

Using the other post you referred to, I created a new variable for plotting purposes and added the “serie” member to both variables. I was able to properly plot the data, but a problem came up when I tried to process large chunks of data (I had been troubleshooting with a 14-byte chunk).

I have an int16 accel_x[1200] array to process, sent as 20 × 120-byte chunks, with the LoRa packets coming in at a 3-second interval. The first chunks go well and get plotted in real time, but after a couple of chunks it seems like Tago cannot process them fast enough and the triggered analyses kind of overlap (a new type ‘b’ packet arrives before the last one has finished processing).

I then created a dynamic table to observe the accel_x_range variable and saw that the values were not appearing in order (e.g., 50, 65, 51, 66, 52, 67), as if two analysis instances were feeding the bucket at the same time.

Is there a way to wait for one packet to be processed before processing the others (some kind of waiting room / buffer)?

Thanks!
Xavier

Hi @xavier,
Answering your question about the SDK: you can check all functions in our documentation, which I believe you already did: https://js.sdk.tago.io/.

You can navigate through the classes and functions to understand what can be sent. For Data Query, for example, you can check the fields here: https://js.sdk.tago.io/interfaces/dataquery.html
There you can see that in order to request more variables you must send the field variables instead of variable, and it must be an array of strings.


Now about your second question, related to ordering the data on the dynamic table.
Serie is only used to group the data, which you did correctly.

In order to order the data in any widget, you need to use the “time” parameter. You can check all the allowed parameters here: https://js.sdk.tago.io/interfaces/data.html. You can use the time parameter to make the data look like it was posted at a specific time.
If you don’t have a timestamp in your data it can be a little tricky. There are two ways to post the data with a correct date/time: you can get it from the scope (which you should have, as you’re triggering the analysis through an action), or from the data you’re collecting in your analysis through getData (e.g., resultArray1[0].time).
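For example, a rough sketch building on the code you posted (resultArray3 holds the accel_data register, so its time field is the chunk’s timestamp):

// reuse the timestamp of the register that triggered the processing
const chunk_time = resultArray3[0].time;

// inside your loop, attach it to each inserted data point
await device.sendData({
  variable: "accel_x_data",
  serie: index, // index computed in your loop
  value: sample,
  unit: "mg",
  time: chunk_time, // widgets can now order by time
});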


About your third question, related to an analysis queue.
All analyses at TagoIO run asynchronously. That means they are not queued up for running. You can trigger the same analysis 100 times and they will all run at the same time; it is entirely possible for the last one triggered to finish before the first one.

I hope that my answers helped you.

Hi @vitor,

Thank you for your answers.

I actually had tried variables instead of variable with an array of strings, but I was unable to figure out how to access the data from there. For example, I tried to log the content of resultArray[0] and resultArray[1] and it was not what I expected. Can you share a quick example of doing it that way instead of calling getData three times as I did?

Understood regarding the “time” parameter; it makes sense since all analyses run asynchronously. I will try to get that timestamp from the scope.

One thing I still find weird: I tried many times to process the chunks, and in the end I never got all the data in the bucket (1200 data points). I did four tests and they all ended up with different totals, like 675, 765, 589, etc. So that’s where I thought it might be a processing problem? Maybe having a sendData call in a for loop without delays or timeouts?
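For example, I was wondering if accumulating the whole chunk and calling sendData once would behave better; an untested sketch of what I mean:

// untested idea: build the whole batch first, then insert it in one request
const batch = [];
for (let i = 0; i < data_len; i++) {
  const sample = data_buffer.readInt16LE(2 * i);
  const index = i + chunk * data_len;
  batch.push({ variable: `accel_${axis}_data`, serie: index, value: sample, unit: "mg" });
  batch.push({ variable: `accel_${axis}_range`, serie: index, value: index });
}
await device.sendData(batch); // one request instead of one per sample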

What do you think?

Regards,
Xavier

Hi @xavier,
About the code, it’s very simple. When you request more than one variable, you will receive a response with all of them in a single array. That means you need to find the one you want by searching the array.

// qty limits the total number of registers returned, so ask for one per variable
const data_list = await device.getData({ variables: ["accel_axis", "chunk_no", "accel_data"], qty: 3 });
if (!data_list.length) throw "error";

const accel_axis = data_list.find((data) => data.variable === "accel_axis");
const chunk_no = data_list.find((data) => data.variable === "chunk_no");
const accel_data = data_list.find((data) => data.variable === "accel_data");
context.log(accel_axis.value);
context.log(chunk_no.value);
context.log(accel_data.value);

About the issue you’re experiencing, I’m not sure I understand it quite right. You mean that your code ran but you don’t see the results in your bucket? Did you check your data limit, input limit, and output limit? Running into the limits of your account can result in weird behavior in the scripts.

And just to help you with the scope: it is a parameter sent directly to the analysis function, so you must set up your function this way:

async function myAnalysis(context)
>>
async function myAnalysis(context, scope)

The scope behaves exactly like any getData for a device, but it will always contain the data that triggered your analysis. You can check the content by just doing a context.log:

context.log(JSON.stringify(scope));
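And to pick one of your variables out of the scope, something along these lines should work (variable name taken from your code):

// scope is an array of the data registers that triggered the analysis
const accel_data = scope.find((item) => item.variable === "accel_data");
if (accel_data) {
  context.log(accel_data.value, accel_data.time);
}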

Hi @vitor,

Thanks for the clarifications. I am new to Node.js… I have done a lot of embedded C, so at least the syntax is similar!

I was running the free version, but I have now upgraded to the starter pack. I will test whether it changes anything.

I did check that I was not going over 7,000 inputs per hour, but maybe I busted some inputs-per-minute limit? Is that something I could see using the Live Inspector?

Is there a max inputs/min threshold?

Thanks,
Xavier

Hi @vitor,

I did some more tests and got a server response telling me “[2020-08-19 21:32:52] You have exceeded the maximum limit of Input (700/min)”, so there is such a threshold. Can it be increased via services?

Other than that, I managed to fix the value ordering by adding the “time” parameter. The payload was timestamped in the payload parser, so that was easy to retrieve. I did not have to use the scope after all.
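Roughly, the fix looks like this in my loop (the per-sample 5 ms offset is just my way of keeping the samples ordered inside a chunk, since that is my sampling interval):

// timestamp written by the payload parser, retrieved via getData
const chunk_time = new Date(resultArray3[0].time).getTime();

// inside the sample loop: offset each sample by the 5 ms sampling interval
await device.sendData({
  variable: `accel_${axis}_data`,
  serie: index,
  value: sample,
  unit: "mg",
  time: new Date(chunk_time + i * 5),
});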

I tested by sending a sawtooth via the emulator and incrementing the chunk_no before every send.

Here’s without the “time” parameter: (chart screenshot)

Here’s with the “time” parameter: (chart screenshot)

The missing data points seem to be caused by the threshold I reached (700/min). It would be nice if we could increase that one…

I also observed another weird behavior: some analyses take much longer to complete. For example, the third sawtooth was fully plotted about 10 seconds after all the other sawtooths were plotted. It’s as if the third triggered analysis had a hard time completing… Is there a way to fix this, or is it normal?

Thanks!
Xavier

Hi @xavier,
You need to increase your hourly data input limit, as that also increases your per-minute data input limit.
This information is best described here: https://docs.tago.io/en/articles/192

About one of your analyses being delayed, this is normal. Sometimes an analysis can take up to ~3 seconds to trigger, but if you’re using the “time” parameter you shouldn’t have any issue.

Great! Thanks for your help!