Breaking down FileMaker’s new (and really easy) way to implement machine learning.
With the release of FileMaker 19, an extremely easy way to implement complex machine learning is available to macOS and iOS FileMaker custom apps. Machine learning automates “identification and analysis” tasks by using trained machine learning “models”. With FileMaker, it’s now easy to script an entire decision tree that combines machine learning with human interaction, leading to very powerful workflows that can ensure a high quality of data.
These models can take over tasks that usually require human interaction to complete. One example is sentiment analysis, which can gauge the tone of an email and automatically escalate it if the tone is determined to be angry. Another use case is image identification, to determine whether an uploaded picture likely matches the requested content. Use cases like these can add up to significant time savings.
FileMaker’s CoreML functionality is built on Apple’s CoreML framework, introduced in iOS 11 and macOS 10.13, so it is limited to iOS and macOS as well. This should especially be taken into consideration for Windows and Linux FileMaker servers, as server-side scripts (schedules and “Perform Script on Server”) will not be able to run CoreML functions.
So just how easy is it to implement these machine learning models with FileMaker? It’s just three easy steps:
Step 1: The Configure Machine Learning Model script step. You will utilize this script step to “install” a machine learning model, stored in a container field, into a named session object that can be used in Step 2. The script step has a number of parameters:
- Operation determines what action you are using to set up your model.
- The Vision option specifies that you will load a model that is designed to interact with images.
- The General option is used for non-image-based machine learning models, such as sentiment analysis or price estimation.
- The Unload option is used to unload the model from session memory when you are finished using it. This is important to do, as it frees up the resources that were being used by your custom app (like memory) to process the model. CoreML can potentially be resource-intensive, so be sure to unload the model when you are done with it.
- Name is the name of the model you are loading. This will be used in Step 2 to reference the loaded model by name. I recommend using a simple text name without special characters, spaces, or numbers. For example: “checkImage” or “estimatePrice”.
- From indicates the container field from which the trained machine learning model is loaded. You can insert the model file into the container manually, or script it with the Insert From URL script step.
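Putting these parameters together, a load-and-unload sequence might look like the sketch below. The field and model names here are hypothetical; adjust them to your own solution:

# Load a vision model stored in a container field under the session name "checkImage"
Configure Machine Learning Model [ Operation: Vision ; Name: "checkImage" ; From: Models::ModelContainer ]

# ... call ComputeModel() as needed (see Step 2) ...

# Free the model's resources once you are finished with it
Configure Machine Learning Model [ Operation: Unload ; Name: "checkImage" ]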
Step 2: The ComputeModel() function. This new function takes a loaded machine learning model (referenced by name) and passes it specific contents for machine learning processing. The first parameter is modelName, which comes from Step 1 above. The remaining parameters are flexible [Key ; Value] pairs, the same as with other functions like Substitute() that accept a variable number of parameters.
So a ComputeModel() function that passes three parameters for estimating the cost of a house may look like this:
ComputeModel( "estimatePrice" ; [ "zipCode" ; "43215" ] ; [ "bedrooms" ; 3 ] ; [ "bathrooms" ; 2 ] )
Or a function that passes an image for analysis may look something like this:
ComputeModel( "isThisAToaster" ; "image" ; MyTable::ImageField )
Vision type CoreML models also support two additional parameters that can limit the number of results returned. This is important because some image-analysis models can return thousands of results (for instance, a vision model that returns the coordinates of every green pixel in an image). The two parameters are:
- confidenceLowerLimit: A numeric value between 0 and 1.0. Confidence measures how sure the machine learning model is about each result. For example, you might get a confidence value of .98 when you pass a picture of a toaster to the above calculation. If you set this parameter to 1.0, you may get no results back at all, as most machine learning models will never return a 100% match, depending on the training methodology. It is extremely useful for filtering out low-confidence matches, though, saving you processing time later.
- returnAtLeastOne: A Boolean (1 or 0). This parameter is only useful in conjunction with the one above: if you set a high confidence filter and no results pass it, enabling this parameter returns the single result with the highest confidence.
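For example, to keep only reasonably confident matches while still guaranteeing at least one result, a call might look like this (the model name, field, and .8 threshold are hypothetical choices, not values from the model itself):

ComputeModel( "locateToasters" ; [ "image" ; MyTable::ImageField ] ; [ "confidenceLowerLimit" ; .8 ] ; [ "returnAtLeastOne" ; 1 ] )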
Step 3: Evaluate the model results. For the majority of use cases, the ComputeModel() function will be used in a Set Variable or Set Field script step so that the results of the machine learning analysis can be checked. Most machine learning models return their results as JSON, so FileMaker’s native JSON functions make the model results easy to process.
For example, if I passed this calculation with my vision model to identify toasters in an image:
ComputeModel( "locateToasters" ; [ "image" ; MyTable::ImageField ] ; [ "confidenceLowerLimit" ; 1 ] ; [ "returnAtLeastOne" ; 1 ] )
I would get the result with the highest confidence similar to this:
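The exact keys depend on the model, but for a vision model reporting a confidence and a location, the JSON might resemble this (values here are hypothetical):

[
	{
		"confidence" : 0.98,
		"location" : "center"
	}
]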
And I would be able to create a simple script that marks the toaster location in my record:
Set Variable [ $result ; JSONGetElement( ComputeModel( etc… ) ; 0 ) ]
If [ JSONGetElement( $result ; "confidence" ) > .75 ]
	Set Field [ MyTable::ToasterDetails ; "There is a toaster located in the " & JSONGetElement( $result ; "location" ) & " of the picture." ]
End If
That’s it! Easy machine learning integration with just a few steps in FileMaker 19! Do you have ideas for how you would use machine learning? Please comment below.