This post is the second part of our previous post on evaluating the value of Dynamo scripts, or computational BIM solutions in general. Even if you haven’t read that one, this piece can be read on its own, since it is more technical and demonstrates a few approaches to collecting Dynamo script usage data.
We previously discussed ways of quantifying the value of Dynamo-based automation solutions for BIM projects and general workflows. To keep with the spirit of automation, the examples presented here are intended to be embedded within Dynamo scripts for Revit (in principle, the process works for any software; only some parts of the presented Python scripts read information from Revit). The added code will not affect the workings of the scripts it is included in, aside from a negligible impact on run time, since it has to connect to an SQL database and write data.
A brief reminder of why automatic collection of Dynamo usage data can benefit AEC consultancies: to measure the impact of developed Dynamo scripts on company workflows, BIM projects and design teams, it is not enough to rely on user feedback sessions alone. Manual tracking of user activity and Dynamo script usage can quickly become a new bottleneck in streamlining the efficiency of your design teams. If not automated, the process would add to the ever-growing number of tasks that AEC organisations need to keep up with, particularly given the increasing digitalisation of the industry. We want a process that intrudes minimally into existing workflows.
Quantifiable metrics are necessary to overcome potential biases in such feedback. Every user inevitably has their own preferences for how Dynamo scripts are developed and used. Within the array of tools that make up an automation-based workflow for your Revit teams, some tools may be used only rarely and never live up to their potential, thus failing to deliver the desired outcomes on your projects. Knowing what is used and what is not helps improve existing workflows to the benefit of the entire design consultancy. Tracking Dynamo script usage enables a better understanding of the actual value delivered, in terms of specialist hours saved compared to manual efforts.
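As a simple illustration of the kind of metric this enables, the sketch below estimates hours saved from usage counts. All figures and names here are made-up assumptions for demonstration, not measurements from the post:

```python
# Hypothetical illustration: estimating specialist hours saved from usage data.
# The run count and per-task durations below are invented for demonstration.

def hours_saved(run_count, manual_minutes, script_minutes):
    """Estimate hours saved by a script, given how long the task takes
    manually versus with the script, per run."""
    return run_count * (manual_minutes - script_minutes) / 60.0

# Assume a sheet-renumbering script was run 120 times; the manual task
# takes ~45 minutes, the scripted one ~5 minutes.
saved = hours_saved(120, 45, 5)
print(saved)  # 80.0 hours
```

With real usage counts coming from the logs or database described below, the per-task durations are the only inputs that still need to be estimated manually or taken from feedback sessions.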
This post will address two simple ways to store the information gathered from Dynamo scripts and the projects they were used on:
- Collecting data into text logs on individual user machines
- Collecting data into an SQL database
The Dynamo scripts and Python code for both are provided in the following sections.
Collecting and storing Dynamo usage information into text logs
Recording all information in text files is the simplest approach to implement, but it comes with drawbacks. Depending on the number of users running Dynamo scripts within the company, storing information in one central text log on a server can be problematic. The core issue is concurrent access: if two or more users run Dynamo scripts at roughly the same time, the usage data collection part of the script can fail for one of them. While minuscule, the time it takes to open the log, write to it, save and close it can be long enough for multiple requests to conflict. The risk grows in larger companies, where multiple offices work with the same remote servers and are therefore more prone to latency and disk read/write delays.
Text logs could be an option for data recording in the following scenarios:
- The number of machines or users running Dynamo scripts is relatively small, so the risk of simultaneous access to a text log on a central server is low.
- The text log is stored locally on each user’s machine and only consolidated into a central log file or database at a later time. This way it does not matter whether there is a single user or hundreds.
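The consolidation step mentioned in the second scenario can be sketched in a few lines of plain Python. The file-name pattern and the assumption that each entry is one line are illustrative, matching the log format used later in this post:

```python
# Sketch of consolidating per-machine text logs into one central log.
# Assumes each local log stores one "|"-delimited entry per line, as in
# the log format shown later in this post.
import glob

def consolidate_logs(local_log_pattern, central_log_path):
    """Append every entry from each local log matching the glob pattern
    into the central log. Returns the number of entries consolidated."""
    count = 0
    with open(central_log_path, "a") as central:
        for log_file in glob.glob(local_log_pattern):
            with open(log_file, "r") as local:
                for line in local:
                    if line.strip():  # skip blank lines
                        central.write(line if line.endswith("\n") else line + "\n")
                        count += 1
    return count
```

Run periodically (for instance as a scheduled task on the server), this keeps individual users writing only to their own local files, which sidesteps the concurrency problem entirely.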
Ultimately, for future potential and scalability it is best to deploy a separate database for this data. But more on that further down this post.
No custom Dynamo nodes or external libraries are required to store the collected information in text files. The default Dynamo installation with a few Python scripts is more than enough for this task, with perhaps a few reservations depending on the type of information you want to collect. The presented instructions rely on Dynamo 2.x. Some Dynamo-related details are not fully spelled out, as this post assumes some basic knowledge of Dynamo for Revit.
The first step is to decide what information is necessary and what data is secondary. Our advice is to avoid collecting too much and rely on as little data as possible; this helps with efficiency and data analysis later. Nonetheless, if there is a need to future-proof your data, new information can easily be included at any point. A few variables worth collecting are the Dynamo script information, the username, the machine ID and the Revit project ID at the time the script was used. This information can later be used for support purposes, detecting issues with Dynamo scripts or Revit projects, and so on.
For the examples, the following information is collected: Dynamo script ID (given manually), script file name, script version (also given manually), username, computer name, project file name, date and time.
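Joining these fields into a single log entry can be sketched in plain Python. The delimiter matches the vertical bar used in the examples below; the field values and the date/time formats are illustrative assumptions:

```python
# Minimal sketch of composing one "|"-delimited log entry from the fields
# listed above. Field values and date/time formats are placeholders.
from datetime import datetime

def format_entry(script_id, script_fname, script_ver, username,
                 pcname, project, when):
    fields = [script_id, script_fname, str(script_ver), username,
              pcname, project,
              when.strftime("%d/%m/%Y"), when.strftime("%H:%M:%S")]
    return "|".join(fields) + "\n"

entry = format_entry("DS-001", "renumber_sheets.dyn", 1.2, "jsmith",
                     "WS-042", "office_tower.rvt",
                     datetime(2021, 3, 1, 9, 30, 0))
print(entry)  # DS-001|renumber_sheets.dyn|1.2|jsmith|WS-042|office_tower.rvt|01/03/2021|09:30:00
```

In the Dynamo graph described below, the same joining is done with standard nodes rather than in Python, but the result is equivalent.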
Dynamo nodes that record and store this data can be placed separately from the main Dynamo script, meaning they have no effect on the main functionality of the script.
Only 8 nodes are used in this instance, among them a single Python node. The steps are as follows:
- Specifying the location of the text log file. Note: it is best to keep it in a central location on a remote server that is backed up, unless you want each user to store the log locally. The script does not create an empty text file, so make sure to manually create a text file with the desired name in your chosen location.
- Manual script identification information, which is not mandatory but having some identification and versioning can go a long way in the future.
- Python code for collecting other required information (code is shown a bit further in the post).
- The collected data is converted into strings.
- Defines the delimiter used to separate distinct data entries in the text log.
- Joins the data entry strings with the delimiter between them.
- Adds a newline character at the end of the string, ensuring each entry appears on a separate line in the log file.
- Writes the formatted string to the specified log file.
Most of the work is done by the Python code. While Python code can seem intimidating to some, its use greatly extends Dynamo’s default capabilities. Not to mention that Python scripts can greatly reduce the run time of Dynamo scripts, which can be a significant hurdle in complex cases. In the present instance, the Python code lets us acquire some information without relying on custom nodes. The entire code is presented below, with comments after the # symbol. Feel free to simplify and modify the code for your own needs; the example is given mostly for clarity and should not be taken as a model of good coding practice.
```python
import clr
import sys
sys.path.append(r'C:\Program Files (x86)\IronPython 2.7\Lib')
import os
import getpass
import socket
from datetime import datetime

clr.AddReference('RevitAPI')
from Autodesk.Revit.DB import *

clr.AddReference('RevitServices')
import RevitServices
from RevitServices.Persistence import DocumentManager

clr.AddReference('DynamoRevitDS')
import Dynamo

# Gets the active document (opened Revit file)
doc = DocumentManager.Instance.CurrentDBDocument

# Obtains the title of the project file (filename)
project_title = doc.Title

# Gets the Dynamo instance so the active script can be recorded
DynamoRevit = Dynamo.Applications.DynamoRevit()

# Obtains the active Dynamo script filename
dyn_script = DynamoRevit.RevitDynamoModel.CurrentWorkspace.FileName
dyn_script_fname = os.path.basename(dyn_script)

# Obtains the username and computer name of the user who runs the script
username = getpass.getuser()
pcname = socket.gethostname()

# Obtains the date and time at the moment the script was run
date = datetime.date(datetime.now())
time = datetime.time(datetime.now())

# The output variable contains a list of all collected data.
# IN[0] is the script ID and IN[1] the script version, both given manually
OUT = [IN[0], dyn_script_fname, IN[1], username, pcname, project_title, date, time]
```
Running such a script with the nodes shown above produces the following text log (sensitive information is pixelated). The data is written line by line, with individual entries separated by the vertical bar |. This layout allows the data to be consolidated later without much effort. The order of the data is determined by the OUT variable. Note: the conversion to strings could be done within the Python script; the present approach of using Dynamo nodes 5 and 6 is not set in stone.
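Reading such a log back for analysis is straightforward. The sketch below parses the "|"-delimited lines into records and counts runs per script; the field names follow the order of the OUT variable above, and the sample entries are invented:

```python
# Sketch of parsing the "|"-delimited log into records for analysis.
# Field order follows the OUT variable of the Python node above;
# the sample log lines are invented for demonstration.
from collections import Counter

FIELDS = ["script_id", "script_fname", "script_ver", "user",
          "computer", "project", "date", "time"]

def parse_log(lines):
    return [dict(zip(FIELDS, line.strip().split("|")))
            for line in lines if line.strip()]

log_lines = [
    "DS-001|renumber_sheets.dyn|1.2|jsmith|WS-042|office_tower.rvt|01/03/2021|09:30:00",
    "DS-002|purge_model.dyn|1.0|adoe|WS-017|office_tower.rvt|01/03/2021|10:05:12",
    "DS-001|renumber_sheets.dyn|1.2|adoe|WS-017|office_tower.rvt|02/03/2021|08:41:55",
]
records = parse_log(log_lines)
runs_per_script = Counter(r["script_fname"] for r in records)
print(runs_per_script["renumber_sheets.dyn"])  # 2
```

A script like this is also a natural way to perform the consolidation into a database mentioned earlier.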
Collecting and storing Dynamo usage information into an SQL database
In the present example, MySQL was used as the SQL database. It can easily be substituted with open-source alternatives such as MariaDB, or with other databases. The first benefit of using a database is centralized storage and the possibility of concurrent access to the data. Another aspect is the structured nature of the data, which can later be easily linked to various data visualisation and analysis dashboards such as Power BI. Even without any outside tools, an SQL database provides better options for querying the data later on.
In this post we will not go into depth on MySQL syntax and query structures, nor on Dynamo or Python basics. The code is provided as-is, so feel free to experiment with it and modify things for your own needs. The example uses MySQL 8.0, MySQL Workbench and Dynamo 2.x. A demo database is used in the following examples; the names, IP addresses and connection information should be modified as needed. The database was left empty, without any predefined tables, so that we can create them within the Python script. That is not mandatory, however; the table can be created manually with all the field information as necessary.
Another major component required for this workflow is an SQL connector that enables the interface with the database from Dynamo/Python. In our case we are using MySQL Connector/NET 8.0. The advantage of the .NET connector is that everything needed is contained in a single MySql.Data.dll library file. This means it can be copied to a central server/storage location, bypassing the need to install the connector on every individual user’s computer, since the DLL file can be imported from Python code.
After downloading MySQL Connector/NET 8.0 and installing it on your development machine, all that is needed is the MySql.Data.dll file from the installation folder, which by default is located in your Program Files directory. Afterwards, feel free to uninstall the connector.
Note: there are alternatives for the SQL connector library; one preferred option would be the Async MySQL Connector for .NET, but we won’t go into details here. Just know that options exist. Another thing to keep in mind is that the Python version that ships with Revit is IronPython, so there are some limitations and a significant lack of available libraries compared to regular Python.
As with the text logs, only a few Dynamo nodes are needed, and the same data is collected as in the text log case. The script is illustrated just below this paragraph. As you can see, it contains only 4 nodes, of which the last can be omitted, since it records the status of the script run afterwards: whether it failed or succeeded. This last node does not affect the core Dynamo script, but it waits for the entire script to complete before running, hence the connection with the main Dynamo script.
For the sake of illustration, the script ID and version were modified slightly (it is the same Dynamo script). As before, these variables are provided manually and passed into the Python code block that collects the remaining information. The second Python script does the major work of creating the table and formatting the SQL query that inserts the information into the database. The same Python code block also retrieves the primary key number and sends it to the last Python node, which updates the database with the status of the executed script.
The contents of the first Python block are identical to the code shown in the previous section. The code of the second Python block is provided below this paragraph. While it can seem intimidating at first, the core principle is simple: first, the MySql.Data.dll library is linked; after that, it is all about defining the SQL queries. In the present example, the queries are provided as strings or multiline strings, denoted by triple quotes. With the queries formed, we open a connection to the database using the login data and execute them. Finally, we retrieve the primary key, a unique identifier of the data row inserted into the table.
```python
import clr
import sys
sys.path.append(r'C:\Program Files (x86)\IronPython 2.7\Lib')

# Define the location of the MySql.Data.dll file
sys.path.append(r'E:\[Dynamo_Resources]\Deployables')
clr.AddReference('MySql.Data')
from MySql.Data.MySqlClient import *

# Flattens the input into a list one layer deep
vals = [item for sublist in IN for item in sublist]

# Define the connection settings (modify the values as necessary for your case)
conndata = """server = 192.168.1.7;
port = 3306;
database = dynamostats;
user = dynotester;
password = xxxxxxxx"""

# Connects to the SQL database with the provided settings
conn = MySqlConnection(conndata)
conn.Open()

# The MySQL query is defined as a string, in this case a multiline string.
# query1 creates the table if it does not exist yet; outside of this tutorial
# it can be done manually once in the database, as it is redundant for
# each run after the first
query1 = """CREATE TABLE IF NOT EXISTS corestats (
    run_id INT AUTO_INCREMENT,
    script_id VARCHAR(12),
    script_fname VARCHAR(255),
    script_ver DECIMAL(4,2),
    user VARCHAR(255),
    computer VARCHAR(255),
    project_fname VARCHAR(255),
    date DATE,
    time TIME,
    completed TINYINT(1),
    PRIMARY KEY (run_id)
) ENGINE=INNODB"""

# This query inserts a new row into the table corestats; the names of the
# table columns are given and the values are expressed as variables.
# Note: the 0 at the end of the values represents the status of the Dynamo
# script; at this stage 0 means that it hasn't finished or failed to finish.
query2 = """INSERT INTO corestats
    (script_id, script_fname, script_ver, user, computer, project_fname,
     date, time, completed)
    VALUES (@val0, @val1, @val2, @val3, @val4, @val5, @val6, @val7, 0)"""

# The variable names are defined in a separate list, without the actual
# values, since the statement used later binds only one value at a time
query2params = ['@val0', '@val1', '@val2', '@val3',
                '@val4', '@val5', '@val6', '@val7']

# Executes query1, which creates a table for our stats if it does not exist already
cmd = MySqlCommand(query1, conn)
cmd.ExecuteNonQuery()

# Prepares a new command statement against the MySQL database
cmd = MySqlCommand(query2, conn)

# Since Parameters.AddWithValue(a, b) only takes a single value at a time,
# a simple for loop is used to go through the lists defined earlier
for i in range(len(query2params)):
    cmd.Parameters.AddWithValue(query2params[i], vals[i])

# Executes the statement against the database
cmd.ExecuteNonQuery()

# This query retrieves run_id, the primary key of the table we created,
# hence a unique entry with which we can identify the data later on.
# A condition check is used to select only the entry with the same timestamp.
query3 = "SELECT run_id FROM corestats WHERE time = @var"

# Begins a new statement for query3 and defines the value for the variable
# (the time value is the eighth entry in the collected data)
cmd = MySqlCommand(query3, conn)
cmd.Parameters.AddWithValue('@var', vals[7])

# Executes the statement, reads the query result and closes the reader
rdr = cmd.ExecuteReader()
rdr.Read()
ind = rdr[0]
rdr.Close()

# Closes the database connection
conn.Close()

# The output is the primary key, run_id, of the table row just inserted
OUT = ind
```
The purpose of the final Python code block is to update the database with the status of the script after it finishes, recording either a succeeded or failed entry. It waits for the final value from the last Dynamo node of the main functional script and checks its output. If the script fails, this node will in most cases output “null”, but in some scenarios, when a node is not properly connected or cannot function for another reason, the output is previewed in Dynamo as “Function”, which is actually expressed as “ProtoCore.DSASM.StackValue”. Therefore, a simple conditional check is used to detect either the “null” value or “ProtoCore.DSASM.StackValue”; in those cases the script does not execute the SQL part and only outputs a simple status of the code itself. Otherwise, the script executes the embedded SQL query.
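The conditional check just described can be isolated in plain Python. The string values below are those the post says Dynamo produces on failure; note that a check written as `cond is 'null' or 'ProtoCore.DSASM.StackValue'` would be a bug, since the `or` clause is always truthy:

```python
# Plain-Python sketch of the completion check: a failed run is assumed to
# convert to the string 'null' or a 'ProtoCore.DSASM.StackValue'
# representation, as described in this post.
def run_succeeded(node_output):
    cond = str(node_output)
    if cond == 'null' or 'ProtoCore.DSASM.StackValue' in cond:
        return False
    return True

print(run_succeeded('null'))  # False
print(run_succeeded(42))      # True
```

The full IronPython block below applies the same check before deciding whether to run the UPDATE query.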
```python
import clr
import sys
sys.path.append(r'C:\Program Files (x86)\IronPython 2.7\Lib')
sys.path.append(r'E:\[Dynamo_Resources]\Deployables')
clr.AddReference('MySql.Data')
from MySql.Data.MySqlClient import *

# Uses the first input as a condition check later, converted into a string.
# Connect the last Dynamo node of the main script to the IN[0] input of
# this Python code block
cond = str(IN[0])

# Obtains the run_id parameter, which is the primary key of the table
ind = IN[1]

# Define the connection settings
conndata = """server = 192.168.1.7;
port = 3306;
database = dynamostats;
user = dynotester;
password = xxxxxxxx"""

# Connects to the SQL database
conn = MySqlConnection(conndata)

# An if statement is used to determine whether the script has completed.
# In case of failure, the input, when converted to a string, will likely be
# either 'null' or contain 'ProtoCore.DSASM.StackValue', which is what
# Dynamo outputs when a node can't execute due to missing variables.
if cond == 'null' or 'ProtoCore.DSASM.StackValue' in cond:
    # res is used as a status variable: 0 means failure, 1 means the SQL
    # database was updated successfully
    res = 0
else:
    # Opens the defined SQL connection
    conn.Open()

    # The SQL query is defined as a string, with variables prefixed by @.
    # Updates the table corestats for the given condition.
    query = "UPDATE corestats SET completed = 1 WHERE run_id = @var"

    # Initializes a new statement that will be executed against the SQL database
    cmd = MySqlCommand(query, conn)

    # Defines the actual value in place of the @var variable in the query
    cmd.Parameters.AddWithValue('@var', ind)

    # Executes the command with the variable expressed above, updating the table
    cmd.ExecuteNonQuery()

    # Closes the database connection
    conn.Close()

    # Variable to represent status
    res = 1

OUT = res
```
One thing to keep in mind is that for these scripts it does not matter if they are executed through Dynamo or through Dynamo Player. The code should not introduce any noticeable delay in script execution times, though that might depend on how much information is being recorded into the database.
As MySQL was used in this example, we can use MySQL Workbench to view the database and inspect the values in question. The above script was executed a few times for illustration purposes, with a deliberate failure during the first run. Hence the completed variable contains 0 for the first run, meaning failure (highlighted in yellow in the figure below). The other entries contain 1, meaning those runs completed successfully. There are countless possibilities for how such status updates could be used for Dynamo scripts and general Revit workflows; the potential for both insight and troubleshooting is apparent. It all depends on how it is utilised.
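One simple use of the completed flag is a per-script failure rate, which highlights unreliable scripts for troubleshooting. The sketch below operates on (script_fname, completed) pairs as they might be fetched from the corestats table; the sample rows are invented:

```python
# Sketch of turning the completed flag into a per-script health metric.
# Input rows are (script_fname, completed) pairs, as could be fetched
# from the corestats table; the sample data below is invented.
def failure_rate(rows):
    """Return a dict mapping script name to the fraction of failed runs."""
    totals, fails = {}, {}
    for name, completed in rows:
        totals[name] = totals.get(name, 0) + 1
        if not completed:
            fails[name] = fails.get(name, 0) + 1
    return {name: fails.get(name, 0) / totals[name] for name in totals}

runs = [("renumber_sheets.dyn", 0), ("renumber_sheets.dyn", 1),
        ("renumber_sheets.dyn", 1), ("purge_model.dyn", 1)]
print(failure_rate(runs))
```

A consistently high failure rate for one script is a strong hint that it needs rework or better user guidance, complementing the feedback sessions discussed next.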
While automatic tracking of Dynamo script usage statistics can provide valuable metrics, it is important not to neglect occasional feedback sessions with users. There is information to be gathered there as well, which can be used to verify certain insights obtained from the collected data.
For a serious computational BIM implementation of tools such as Dynamo or Grasshopper, automated usage tracking and data collection should not be ignored. The data enables actual quantification of the value your design consultancy gains from the implemented Dynamo workflows or individual tools. Usage data can reveal which Dynamo scripts are used most often or not at all, provide guidance for future workflow implementations and reveal pitfalls to avoid on future projects. Moreover, access to such usage data can greatly contribute to a data-driven AEC future and the adoption of machine learning based applications. The future will be defined by data as one of the primary resources for innovation.
If you have any questions about this post or would like to consult on how Dynamo workflows could enhance the efficiency of your design teams working with Autodesk Revit, drop us an email at email@example.com or fill out the form below.