
Serving Files with Universal Dashboard

To serve files with Universal Dashboard, you need to take advantage of the Publish-UDFolder cmdlet. In this blog post we will look at how to provide download links so that users can download static files as well as files you generate based on user input.

Sharing a Folder

Sharing a folder with Universal Dashboard is easy. Publish-UDFolder allows you to specify a local file path as well as a URL that the user will access to download files.
In this example, I am publishing a folder called share in my $PSScriptRoot path. The user can request files from the share using standard HTTP clients like web browsers or Invoke-WebRequest.

$Folder = Publish-UDFolder -Path "$PSScriptRoot\share" -RequestPath "/share"
$Dashboard = New-UDDashboard -Title 'Downloads' -Content {
}
Start-UDDashboard -Dashboard $Dashboard -PublishedFolder $Folder -Port 10000

Once the dashboard is started, you can request files from the server by accessing it via URLs such as http://localhost:10000/share/users.txt.

Invoke-WebRequest http://localhost:10000/share/users.txt

Providing a Download Link on the Dashboard

To provide a link to files in the dashboard, you can use standard links with URLs that point to the files you would like to share. You’ll notice that when you click this link, the file opens in the browser.

$Dashboard = New-UDDashboard -Title 'Downloads' -Content {
    New-UDLink -Text "Download" -Url "http://localhost:10000/share/users.txt"
}

To force a download of the file rather than opening in the browser, you can use New-UDElement to create an anchor tag with the download attribute. This will instruct the browser to download the file rather than opening it.

New-UDElement -Tag 'a' -Attributes @{
    'href' = 'http://localhost:10000/share/users.txt'
    'download' = 'myFileName.txt'
} -Content {
    "Download"
}

Dynamic File Downloads

Sometimes you will want to generate a file from user input and then allow the user to download it. You can do this by providing the shared folder path as a variable and writing files to that path. After writing the files, use the -Content parameter of New-UDInputAction to return a download button to the user.

$SharePath = "$PSScriptRoot\share"
$EndpointInit = New-UDEndpointInitialization -Variable "SharePath"
$Dashboard = New-UDDashboard -Title 'Downloads' -Content {

    New-UDInput -Title "Create File" -Endpoint {
        param($FileContents)

        $FileName = (New-Guid).ToString() + ".txt"
        $FullFileName = Join-Path $SharePath $FileName
        $FileContents | Out-File -FilePath $FullFileName

        New-UDInputAction -Content (
            New-UDElement -Tag 'a' -Attributes @{
                'href' = "http://localhost:10000/share/$FileName"
                'download' = "myFileName.txt"
                className = "btn"
            } -Content {
                "Download"
            }
        )
    }
} -EndpointInitialization $EndpointInit

You can then use a scheduled endpoint to periodically clean up the files that users generate.

$Schedule = New-UDEndpointSchedule -Every 10 -Minute
$Endpoint = New-UDEndpoint -Schedule $Schedule -Endpoint {
    Get-ChildItem -Path $SharePath | Remove-Item
}
Start-UDDashboard -Dashboard $Dashboard -PublishedFolder $Folder -Endpoint $Endpoint -Port 10000


In this blog post we looked at how to create published folders and share files with users. We also went over how to create download links and how to provide dynamically generated files to your end users.


Understanding Authorization Policies in Universal Dashboard

This post discusses features available in Universal Dashboard.

Authorization policies in Universal Dashboard allow you to control the content that particular users have access to. You can limit access to the entire dashboard, to pages themselves or even to individual controls. In this post we will go over how to use authorization policies to create secure dashboards.

The Basics

Authorization policies are tightly linked with authentication. After authentication takes place, the user has a series of claims associated with their session. These claims can be evaluated to determine whether a user has access to a particular resource.

Depending on how you authenticate to your dashboard, the claims the user has will be different. This means that it’s necessary to understand the claim system in order to effectively take advantage of authorization policies. In this post, we will use Azure Active Directory authentication as our example.

Configuring Azure Active Directory

To configure Universal Dashboard for Azure Active Directory authentication, we need to use the New-UDAuthenticationMethod cmdlet alongside the New-UDLoginPage cmdlet. You’ll need to get your ClientId, Instance URI, Domain and TenantId from the Azure portal. You’ll also need to configure an application registration within your directory. The process for that is covered in this document.

Once you’ve registered your application, you can now write the script for your dashboard. You’ll need to use the following script to do so.

$AuthenticationMethod = New-UDAuthenticationMethod -ClientId '1111e4b-45aa-43bb-beae-304028707777' -Instance 'https://login.microsoftonline.com/' -Domain 'mytenant.onmicrosoft.com' -TenantId '11111c97-4b76-4470-a736-8481d71111111'
$LoginPage = New-UDLoginPage -AuthenticationMethod $AuthenticationMethod 

You can now pass the $LoginPage variable to your dashboard and start it. Make sure to listen on the same port as the redirect URI you configured in Azure AD.

$Dashboard = New-UDDashboard -Title "Authorization" -Content {

} -LoginPage $LoginPage 
Start-UDDashboard -Port 10000 -Dashboard $Dashboard -AllowHttpForLogin

When visiting your dashboard, you should now see a login page with a “Sign in with Azure Active Directory” button.

Azure AD Login

Once you click the button, you will be forwarded to the Microsoft login page for your directory. Once you login, you will be forwarded back to Universal Dashboard. You should now see the home page.

Authorized Home Page

Configuring an Authorization Policy

To configure an authorization policy, you need to use the New-UDAuthorizationPolicy cmdlet. This cmdlet accepts a script block that will run to evaluate whether the current user has access to the resource they are trying to retrieve.

An authorization policy simply needs to return $true or $false. If the policy throws an error, it is considered $true, so ensure that you wrap your code in a try/catch if you do not want that behavior.

The most basic authorization policy is as follows.

$AuthorizationPolicy = New-UDAuthorizationPolicy -Name "Login" -Endpoint {
    $true
}

This authorization policy always returns $true and is named “Login”. You will use this name whenever you want to enforce the authorization policy.
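Because a policy that throws is treated as having passed, a more defensive policy wraps its logic in try/catch and fails closed. The following is a sketch; the allowed-users file check is purely hypothetical:

```powershell
$AuthorizationPolicy = New-UDAuthorizationPolicy -Name "Login" -Endpoint {
    try {
        # If anything in here throws, UD would treat the policy as passed,
        # so catch the error and explicitly return $false instead.
        (Get-Content "$PSScriptRoot\allowed-users.txt" -ErrorAction Stop).Count -gt 0
    }
    catch {
        $false
    }
}
```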

You can now ensure your dashboard uses this authorization policy by passing it to New-UDLoginPage.

$LoginPage = New-UDLoginPage -AuthenticationMethod $AuthenticationMethod -AuthorizationPolicy $AuthorizationPolicy

Using the Authorization Policy

To use the authorization policy, you can either assign it to a page or take advantage of the Get-UDAuthorizationPolicy cmdlet. When you pass the name of the authorization policy to New-UDPage, it will ensure that only users that pass the authorization policy have access to that page. When you invoke Get-UDAuthorizationPolicy from within your dashboard, you will get all the authorization policies that the user has passed.

To use it on a page, simply pass the name of the policy to the -AuthorizationPolicy parameter of New-UDPage. This parameter accepts an array of policies if you desire.

As you can see below, we have a page that has an authorization policy assigned and one that does not.

$Settings = New-UDPage -Name "Settings" -AuthorizationPolicy "Login" -Content {
}

$HomePage = New-UDPage -Name "Home" -Content {
}

$Dashboard = New-UDDashboard -Title "Authorization" -Pages @($HomePage, $Settings) -LoginPage $LoginPage

When you login, you’ll notice that you have access to both pages. If you go back and change your authentication policy to return $false, you will no longer have access to the settings page. If you try to access the page directly by URL, it will return Page Not Found.

You can also use authorization policies to hide controls on your dashboard. To do this, you can use Get-UDAuthorizationPolicy and check whether a particular policy is returned before returning a control. This check needs to happen dynamically, so it must be in an Endpoint script block rather than Content.

If I modify my home page to show two columns of controls, I can use Get-UDAuthorizationPolicy to check to see if I should return a control before showing it to the user.

$HomePage = New-UDPage -Name "Home" -Content {
    New-UDRow -Endpoint {
        New-UDColumn -Size 6 -Content {
            New-UDHeading -Text 'Super Public Info' -Size 1
        }
        New-UDColumn -Size 6 -Content {
            $Policy = Get-UDAuthorizationPolicy
            if ($Policy -contains 'Login') {
                New-UDHeading -Text 'Super Secret Info' -Size 1
            }
        }
    }
}
If you run the dashboard now, you’ll see that the super secret info is not shown. If you again change your authorization policy back to return $true, you will see the super secret info. You can use this technique anywhere an Endpoint parameter is present.

Checking Claims

Authorization policies would be pretty useless if you didn’t actually check some information about the user. To do this, we take advantage of the claims provided by the authentication mechanism we are using. In this case, Azure Active Directory is returning a bunch of claim information to us. To access this information, we can adjust our authorization policy endpoint to accept a $User parameter. This variable is an instance of the ClaimsPrincipal class. You can use this variable to check whether a user has particular rights.
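To get a feel for how claims work independent of UD, you can construct a ClaimsPrincipal by hand in any PowerShell session. This standalone sketch reuses the claim values from this post:

```powershell
using namespace System.Security.Claims

# Build an identity carrying a 'groups' claim, similar to what
# Azure AD attaches to the session after login.
$claims = @(
    [Claim]::new('groups', 'b69421c1-381f-41e9-9105-1ed85768cde1')
    [Claim]::new('name', 'Adam Driscoll')
)
$identity = [ClaimsIdentity]::new([Claim[]]$claims)
$user = [ClaimsPrincipal]::new($identity)

# HasClaim matches on both claim type and claim value.
$user.HasClaim('groups', 'b69421c1-381f-41e9-9105-1ed85768cde1')   # True
$user.HasClaim('groups', '00000000-0000-0000-0000-000000000000')   # False
```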

In our authorization policy, we can now use that variable and call methods such as HasClaim to validate whether a user has a particular claim.

$AuthorizationPolicy = New-UDAuthorizationPolicy -Name "Login" -Endpoint {
    param($User)

    $User.HasClaim("groups", "b69421c1-381f-41e9-9105-1ed85768cde1")
}

In my environment, the GUID listed is the Object ID of a group that the user is a part of.

Azure AD Group

HasClaim will return true if the user is part of that group. Now, when a user that logs into the dashboard is part of the Dashboard Users group, they will have access to the Settings page and the Super Secret Info control.

Debugging Claim Checks

One of the problems with claims is that they are cryptic and vary per authentication mechanism. To accurately check claims, you will likely want to evaluate the $User variable in the debugger to see which claims are present when a user logs in. To do this, call Wait-Debugger in the New-UDAuthorizationPolicy endpoint.

$AuthorizationPolicy = New-UDAuthorizationPolicy -Name "Login" -Endpoint {
    param($User)

    Wait-Debugger

    $User.HasClaim("groups", "e1b6e95e-6241-4a1a-886d-d5fc0f606f99")
}

When you go to login to your dashboard, you will now see only some of it load. Additionally, the console output will indicate that it has entered debug mode.

Debugging Authorization Policies

If you now type the $User variable and press enter, you will see the object is present in the runspace. You’ll be able to dump the claims that are currently set on the object. This will provide you insight into how to correctly check for the claims of the user.

[DBG]: PS C:\Users\adamr>> $User.Claims

Issuer         :
OriginalIssuer :
Properties     : {[, amr]}
Subject        : System.Security.Claims.ClaimsIdentity
Type           :
Value          : pwd
ValueType      :

Issuer         :
OriginalIssuer :
Properties     : {}
Subject        : System.Security.Claims.ClaimsIdentity
Type           : groups
Value          : b69421c1-381f-41e9-9105-1ed85768cde1
ValueType      :

Issuer         :
OriginalIssuer :
Properties     : {}
Subject        : System.Security.Claims.ClaimsIdentity
Type           : name
Value          : Adam Driscoll
ValueType      :


In this post we went through how to configure your dashboard to use authentication and authorization policies to validate user claims against Azure Active Directory. The same process holds true for configuring policies for other authentication methods. Check out the docs to learn more.


Building a Windows Form app with PowerShell in VS Code

This post uses PowerShell Pro Tools. PowerShell Pro Tools is a set of tools for PowerShell developers to build GUIs, package as executables and more.

PowerShell Pro Tools integration for VS Code provides the ability to generate Windows Forms PowerShell scripts that you can run in Windows PowerShell and PowerShell 7. Windows Forms code only runs on Windows; although PowerShell 7 runs on Linux and macOS, the Windows Forms libraries are not available on those platforms.

Installing PowerShell Pro Tools

You’ll first need the PowerShell extension for Visual Studio Code installed. After installing the language extension, next install the PowerShell Pro Tools extension.

Install the Extension

After installing the extension, you will need a license. You can receive a trial license that is good for 21 days by invoking the PowerShell Pro Tools: Request Trial License command from the Command Palette (Ctrl+Shift+P).

Enter your email address and a trial license will be sent to your email within a couple minutes. The license will be attached as a text file to the email.

License Attachment

Open the license file and copy the entire text of the document. Issue the PowerShell Pro Tools: Install License Key command from the Command Palette (Ctrl+Shift+P). Paste the license key into the input box and press enter.

To view your license information after installing the license, issue the PowerShell Pro Tools: Display License Information command from the Command Palette (Ctrl+Shift+P).

License Information

Your extension is now installed and licensed.

Creating a Windows Form

To create a Windows Form, you first need to create a PS1 file. This file will be responsible for the logic of your form. You can name the PS1 file whatever you wish.

After creating your PS1 file, invoke the PowerShell Pro Tools: Show Windows Form Design command from the Command Palette (Ctrl+Shift+P).

Designer is opening

The PowerShell Pro Tools Form Designer will open as a separate window.

PowerShell Pro Tools Form Designer

The form designer looks and behaves very similarly to the form designer in Visual Studio. On the left-hand side is the designer canvas with the form component. On the right-hand side is a collection of controls in the toolbox and a property grid for the currently selected control. At the bottom of the property grid, it shows the currently selected control’s name and type.

Building Your Form

To build your form, select a component from the Toolbox by left-clicking on it and then left-clicking on the form to drop the control where it was clicked. Dragging and dropping does not yet work for the toolbox.

After adding the control to the form it will be selected. As stated earlier, when a control is selected, the name and type appear below the control’s properties on the right hand side. Additionally, the Unsaved Form Indicator (*) will appear next to the file name of the designer file path.

Control Information

You can modify the control’s properties in the property grid. Changing the name will change the control’s variable that you will use in the PowerShell script.

Adding Form Logic

In order to add logic to your form, you will need to hook up event handlers to the controls. This can be done in two ways. The first way is to double-click the control you wish to add an event handler to. This will create the default event handler for the control. For a button, that is the onClick event handler. For a checkbox, that is the onChecked event handler. Each control will be different.

If you save and then switch back to the form.ps1 file in VS Code, you will see that the event handler has been created.

Double-click event handler
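The generated handler is a script block assigned to a variable named after the control and event. The exact shape depends on the designer version; this is a rough sketch with a hypothetical button name:

```powershell
# 'button1' is whatever you named the control in the designer.
# The designer-generated code wires this script block to the Click event.
$button1_Click = {
    [System.Windows.Forms.MessageBox]::Show('Button clicked!')
}
```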

If you want to create event handlers that are not the default handler, you will need to open the event tab on the Properties Grid. This is the lightning bolt icon.

The property grid will now show all the event handlers for the control. You can see the onClick event handler we created by double clicking. You can enter the name of a new event handler in the textbox to the right of the event name to create it. Make sure to save after entering a name.

Event Property Grid

Packaging a Windows Form app

You can now use the packaging features of PowerShell Pro Tools to create an executable out of the two PS1 files that make up your form. The first step is to ensure that you have the correct dependencies installed for packaging. Since the packaging process uses the .NET Compiler to create the executable, you will need that installed.

Once your dependencies are installed, you can issue the PowerShell Pro Tools: Package as Executable command. Make sure to have your form.ps1 file selected before you do this.

After running this command, you will see output in the Terminal about the success or failure of the packaging process.

A successful package will look like this.

Successful Packaging

Once packaged, you can run the form.exe that was created. The form should show up, just as it would if you were to run the PowerShell script.

The running executable

In addition to creating the exe, a package.psd1 file will also be created to allow you to adjust the packaging settings. You can configure whether to hide the console window, if the application should run as administrator or if you need high DPI support enabled.
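A package.psd1 along these lines is what you can expect; the key names and paths here are a sketch and may differ slightly between versions:

```powershell
@{
    Root       = 'C:\src\form.ps1'   # entry-point script to package
    OutputPath = 'C:\src\out'        # where the .exe is written
    Package    = @{
        Enabled           = $true
        HideConsoleWindow = $true    # no console window at all
        RequireElevation  = $false   # run the app as administrator
        HighDPISupport    = $true    # enable high DPI awareness
    }
}
```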

PowerShell Core Support

As mentioned earlier, the VS Code Form Designer only works on Windows, under either Windows PowerShell or PowerShell 7. Since PowerShell 7 is built on .NET Core 3.0, it now has Windows Forms support. This means you can run the PowerShell Designer and produce forms that run under PowerShell 7.

Windows Forms Running in PowerShell 7


In this post we went over how to create a Windows Forms application with PowerShell Pro Tools for Visual Studio Code. For more information about packaging, visit our documentation site. To find out about licensing, please visit our store.


PSAvalonia – Open source PowerShell bindings for Avalonia

Avalonia is a WPF-style cross-platform UI library. Today, we are open sourcing a PowerShell module to create UIs using the Avalonia library. The Avalonia bindings that were once part of PowerShell Pro Tools are now open source and up on GitHub and the PowerShell Gallery. You can download the latest version using Install-Module.

Install-Module PSAvalonia

You can contribute on GitHub here.

What can PSAvalonia do?

You can use PSAvalonia to create cross-platform UIs that work in PowerShell Core and PowerShell 7. It has been tested on 6.2 and 7. PSAvalonia does not work in Windows PowerShell at the moment.

For example, you could use the same script on both Linux and Windows.

$Xaml = '<Window xmlns="https://github.com/avaloniaui"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        mc:Ignorable="d" d:DesignWidth="800" d:DesignHeight="450">
    <StackPanel>
        <Button Width="160" Name="button">My Button</Button>
        <TextBox HorizontalAlignment="Left" Margin="12,12,0,0" Name="txtDemo" VerticalAlignment="Top" Width="500" Height="25" />
    </StackPanel>
</Window>'
$Window = ConvertTo-AvaloniaWindow -Xaml $Xaml
$Button = Find-AvaloniaControl -Name 'button' -Window $Window
$txtDemo = Find-AvaloniaControl -Name 'txtDemo' -Window $Window
$Button.add_Click({ $txtDemo.Text = "Hello, World from $($PSVersionTable.OS) running PowerShell Core $($PSVersionTable.PSVersion)" })
Show-AvaloniaWindow -Window $Window
Running on Windows
Running on Linux


Processing CSV Files with Universal Dashboard

Universal Dashboard is a web framework for PowerShell. You can download it from the PowerShell Gallery.

Universal Dashboard provides the ability to create user input forms using the New-UDInput cmdlet. In this post we will look at how to create a form that allows the user to upload a CSV file and process it within PowerShell.

UDInput Basics

New-UDInput has two different ways of functioning. The first is to use a param block to define an input form directly from the PowerShell script you specify.
In the below example, I can create a basic form that accepts a couple of different fields.

New-UDInput -Title 'New User Form' -Endpoint {
    param($FirstName, $LastName, $UserName)

    New-ADUser -GivenName $FirstName -Surname $LastName -Name $UserName
}

This script would produce the following form.

Form generated from param block

Although this type of form is easy to produce, it doesn’t provide the same level of customization as the -Content parameter of New-UDInput.

UDInput Content

The -Content parameter can be used in conjunction with the -Endpoint parameter to define the fields that show up in the form. You have more options when it comes to the type of form when using this method.

Instead of creating a form that accepts a single user, we may want a form that accepts a CSV and then creates many users. Just as with the basic form, you’ll still need to provide an -Endpoint script block for processing the input data. Unlike the basic form, the actual field definitions come from the -Content block rather than the param block in the Endpoint script block.

New-UDInput -Title 'Bulk Import User Form' -Content {
    New-UDInputField -Type file -Name users
} -Endpoint {
    param($users)

    $UserObjects = $users | ConvertFrom-Csv

    $UserObjects | ForEach-Object {
        New-ADUser -GivenName $_.FirstName -Surname $_.LastName -Name $_.UserName
    }

    Show-UDToast -Message "Created $($UserObjects.Length) users"
}

This will produce a form like this.

Form created with Content and New-UDInputField

After uploading a file, the users will be created in Active Directory and a toast will be shown to the end user.

Uploading a CSV to New-UDInput


In this post we went through how to upload and process a CSV using New-UDInput. Using the -Content parameter you have more control over the types of input controls that you can use in your dashboards.


PowerShell Tools 4.7.0 Release Notes

Support for High DPI Windows Forms applications

The packager now supports high DPI forms applications. Use the High DPI Support package setting to enable support. You can also enable this setting via the Merge-Script package config file.

High DPI Setting in Visual Studio

Fixed window flash on start and exit of packaged application

When Hide Console Window is selected, no console will be shown at all. Previously, a console used to flash when you exited your application. This is no longer the case.

Fixed app crash of packaged application

Occasionally, the application would crash when exiting due to a FileNotFoundException.


Dude, where’s my var? – Understanding scoping in Universal Dashboard

Universal Dashboard is a web framework for PowerShell.

Universal Dashboard allows users to create websites and REST APIs using just PowerShell. Users expect a UD script to behave much like any other PowerShell script, because it looks just like one. To get as much performance out of the web server as possible, Universal Dashboard uses background runspaces to allow for concurrent execution of the script blocks that define the dynamic functionality of the dashboard. Because of this, variable, module, and function scoping can be a little weird when dealing with UD.

In this blog post, we will look into the intricacies of scoping with Universal Dashboard endpoints.

What’s in a runspace?

To understand how Universal Dashboard functions, we need to understand a little bit about the runspace feature of PowerShell. Runspaces are somewhat isolated containers for PowerShell execution. When you start a PowerShell console, you will be invoking your commands in the default runspace. You can view runspaces by using the Get-Runspace command.

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

PS C:\> Get-Runspace

Id Name      ComputerName Type  State  Availability
-- ----      ------------ ----  -----  ------------
 1 Runspace1 localhost    Local Opened Busy

A runspace allows a single pipeline to execute at a time. This means you can’t run two commands in the same runspace simultaneously. Because of this, runspaces are roughly synonymous with threads in other programming languages.

In addition to controlling the execution of a script, runspaces also maintain the constructs that we are used to in any PowerShell environment. Variables, functions, modules, and providers are examples of artifacts that are tied to a runspace. Most of the time we are working with PowerShell, we are dealing with a single runspace, so these types of artifacts seem to exist indefinitely, and globally, in the environment.

Note there are scopes within the runspace that control the lifetime and accessibility of these artifacts. Read about_Scopes for more information.

When we create a new runspace, that means we can now execute another PowerShell command in tandem and have a completely new set of variables, modules, and functions in that runspace’s scope.

A good way to demonstrate this is to use the ThreadJob module. This module creates new runspaces in the background so that multiple commands can be run at once. The difference between a standard PowerShell job and a runspace job is that a standard PowerShell job actually starts a new process rather than creating a new runspace.

If we start a thread job and then use the Get-Runspace command, you’ll see that we now have more than one runspace.

PS C:\> Start-ThreadJob -ScriptBlock { Start-Sleep 10 }

Id Name PSJobTypeName State   HasMoreData Location   Command
-- ---- ------------- -----   ----------- --------   -------
1  Job1 ThreadJob     Running False       PowerShell Start-Sleep 10

PS C:\> Get-Runspace

Id Name      ComputerName Type  State  Availability
-- ----      ------------ ----  -----  ------------
 1 Runspace1 localhost    Local Opened Busy
 2 Runspace2 localhost    Local Opened Busy

To demonstrate how each runspace has its own scope, let’s create a variable in the default runspace and then try to access it in the background runspace.

As you’ll see below, the Receive-Job call did not return a value. This is because $MyVar does not exist in the background runspace.

PS C:\> $MyVar = "Test"
PS C:\> Start-ThreadJob -ScriptBlock { $MyVar } | Wait-Job | Receive-Job
PS C:\>

One thing to note is that runspace scoping is different from .NET variable scoping. You can define static variables in .NET that are available across runspaces. Assemblies loaded into a PowerShell process are also not tied to a runspace but are global throughout the process. This knowledge allows us to transfer variable state across runspaces.
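You can demonstrate this process-wide .NET state from plain PowerShell. This sketch stores a value in AppDomain data (static .NET storage) from the default runspace and reads it back from a freshly created runspace:

```powershell
# .NET static state is per-process, not per-runspace.
[System.AppDomain]::CurrentDomain.SetData('SharedValue', 'Hello')

# Create a second runspace; it won't see our PowerShell variables...
$ps = [PowerShell]::Create()
$null = $ps.AddScript('[System.AppDomain]::CurrentDomain.GetData(''SharedValue'')')
$result = $ps.Invoke()
$ps.Dispose()

# ...but it can read the static .NET state.
$result[0]   # Hello
```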

Runspace Initialization

It’s possible to create a runspace and initialize the runspace with lots of different PowerShell artifacts. We can pass in variables, functions, modules and even snap-ins (remember those!?).

This is accomplished with the InitialSessionState class. It provides the ability to define the state of the runspace when it’s created. The ThreadJob module takes care of this for us. UD also has a helper to set up the initial session state. The New-UDEndpointInitialization cmdlet actually creates an InitialSessionState object that is then used to initialize the runspaces it uses when executing endpoints.

PS C:\> New-UDEndpointInitialization -Variable MyVar | Get-Member

   TypeName: System.Management.Automation.Runspaces.InitialSessionState

If we look at the InitialSessionState returned by New-UDEndpointInitialization you’ll see that the variable is set.

PS C:\> $InitialSessionState = New-UDEndpointInitialization -Variable MyVar
PS C:\> $InitialSessionState.Variables | Where-Object Name -eq 'MyVar'

Value       :
Description :
Options     : None
Attributes  : {}
Visibility  : Public
Name        : MyVar
PSSnapIn    :
Module      :

Whenever you execute an endpoint in UD it now has access to this variable because it’s part of the initial session state.

The InitialSessionState class has more options that are available with New-UDEndpointInitialization. You can always call the object directly to add more artifacts to the session state.

PS C:\> $InitialSessionState = New-UDEndpointInitialization -Variable MyVar
PS C:\> $InitialSessionState.Assemblies.Add([System.Management.Automation.Runspaces.SessionStateAssemblyEntry]::new("System.Windows.Forms"))

Runtime Variables

Although initial session state is valuable when passing in global variables you’d like to use throughout your dashboard, they aren’t good for variables that are set during runtime or change based on data brought into the script. UD has a couple of different ways of setting these variables during execution.

Auto-scoped variables

Auto-scoped variables are variables that are available in the child-endpoints. An example of this is where you have an endpoint create a control that also has an endpoint. This requires UD to pass in the variable during execution of the endpoint script block.

For example, you might have a dynamic UDColumn that creates a UDGrid. In this case, we have two different endpoints. If you read the last post on performance, you’ll know that this actually results in two HTTP requests and thus executes two script blocks independent of each other; one for each Endpoint.

New-UDColumn -Endpoint {
    $Data = Get-Data
    New-UDGrid -Title 'Data' -Endpoint {
        $Data | Out-UDGridData
    }
}

When New-UDGrid is executed, the Endpoint is stored internally as a string.

Additionally, New-UDGrid (or any cmdlet with an Endpoint parameter) looks at the script block to see which variables are defined within it. In this case, it finds that the $Data variable is used. It then calls Get-Variable to get the value of $Data and stores that along with the Endpoint script block string.

When the endpoint itself is actually executed, the endpoint script is then parsed back into a script block. Then the $Data variable is set as part of that execution so it is available when the endpoint is running.

This works the same for all the built-in dynamic variables such as $Response, $Request, and $User.

You can think of the resulting endpoint being executed as something like this.

Set-Variable -Name 'Data' -Value $DataFromAnotherRunspace
Set-Variable -Name 'Request' -Value $RequestFromAnotherRunspace
Set-Variable -Name 'Response' -Value $ResponseFromAnotherRunspace
$Data | Out-UDGridData

This effectively moves the variable from one runspace to another. After a runspace is executed, ResetRunspace is called to reset the runspace back to the initial session state. This means that any variables that you set during execution of an endpoint are cleared after execution.

One of the caveats with auto-scoped variables is that they do not work with built-in variables such as $_ or $PSItem. You can see that this doesn’t work in a regular PowerShell console.

PS C:\> Set-Variable -Name _ -Value "test"
PS C:\> $_
PS C:\>

Explicit-Scoped Variables

As seen with the caveat of auto-scoped variables, you can see that certain PowerShell constructs don’t behave as you expect in UD.

For example, the below ForEach-Object doesn’t work as expected. The $_ will be $null in the New-UDEndpoint endpoint. This is because the PowerShell engine is resetting the $_ variable after we set it.

1..100 | ForEach-Object {
    New-UDButton -OnClick {
        Show-UDToast -Message $_
    }
}

To work around this, you can use the New-UDEndpoint cmdlet to specify an argument list to be passed to the endpoint. The reason this works is that when New-UDEndpoint is called, it is still in the same runspace as the ForEach-Object. This means that we still have access to the current value of the $_ variable. We then store that variable value along with the endpoint and make it accessible via the $ArgumentList variable when it’s executed later.

1..100 | ForEach-Object {
    New-UDButton -OnClick (New-UDEndpoint -Endpoint {
        Show-UDToast -Message $ArgumentList[0]
    } -ArgumentList $_)
}

Cache and Session Scoping

Cache and session scoping allow variables to be stored globally and per user. The Cache scope is considered global to the current instance of UD. If you set a cache variable, it’s available in any endpoint.

Cache scoping works by defining a PS Provider (like the file system provider). When you set a variable using cache scoping, it’s actually doing something like Set-Item on the cache provider. Internally, the cache provider takes advantage of the ASP.NET Core MemoryCache class to store those variables in memory.

Since the memory cache is global and available in all endpoints, accessing those variables works from any endpoint. Depending on the order of operations, you can run into situations where a cache variable is read before it has been set, so you may need to initialize the variable ahead of time.
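As a sketch of that initialization pattern (the $Cache:Users name and the use of Get-LocalUser are illustrative assumptions):

```powershell
# Initialize the cache variable before the dashboard starts so no
# endpoint can read it while it is still unset.
$Cache:Users = @()

$Schedule = New-UDEndpointSchedule -Every 5 -Minute
$Endpoint = New-UDEndpoint -Schedule $Schedule -Endpoint {
    # Repopulate on a schedule; readers always see at least the initial value.
    $Cache:Users = Get-LocalUser
}

$Dashboard = New-UDDashboard -Title 'Users' -Content {
    New-UDGrid -Title 'Users' -Endpoint {
        $Cache:Users | Out-UDGridData
    }
}

Start-UDDashboard -Dashboard $Dashboard -Endpoint $Endpoint -Port 10001
```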

You can access the cache scope outside of the dashboard entirely.

Import-Module UniversalDashboard
$Cache:Init = 'InitMe'

Session scope works a bit differently. It takes advantage of the current user’s session. When a user connects to UD, they are granted an ASP.NET session cookie. This cookie identifies the user’s session. Data that is stored within the session cache can only be set and retrieved by the browser that has that session cookie.

Aside from that, the session scope works much the same as the cache scope.
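As a sketch, a per-user counter stored in the session scope (the $Session:Visits name is hypothetical):

```powershell
New-UDElement -Tag 'div' -Endpoint {
    # Each browser session gets its own copy of $Session:Visits;
    # other users' sessions cannot see or change it.
    if ($null -eq $Session:Visits) { $Session:Visits = 0 }
    $Session:Visits++
    "You have loaded this component $($Session:Visits) times."
}
```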

In Conclusion

Scoping can be a little weird in UD due to the nature of multiple runspaces. Remember that auto-scoping should work in most cases, but you can resort to explicit scoping when necessary. The cache and session scopes can also be used to avoid these types of scoping issues altogether, but overuse of the session scope can cause race conditions due to the threaded nature of the web server.

Best practice would be to use auto-scoping where possible. When dealing with loops and automatic variables, such as $_, you should use explicit scoping. When dealing with user state, use the session scope, but know that there is no guarantee as to which endpoint will load first. Finally, the cache scope can be used for globally available data.


Best Practices for Universal Dashboard Performance

Universal Dashboard is a web framework for PowerShell. It allows you to create websites and REST APIs with just PowerShell script. Unlike managed languages such as C# and F#, PowerShell is not compiled to highly optimized IL code during a compilation step. Instead, it is parsed, tokenized, interpreted, and compiled at runtime. This results in a large performance difference compared to IL-based languages, and in a framework such as Universal Dashboard it can be especially evident.

In this blog post, we will look at some performance considerations when using Universal Dashboard as the platform for your next web project.

Baseline

The baseline performance for the Universal Dashboard web-server is below. Universal Dashboard is built on ASP.NET Core. Although ASP.NET Core is capable of extremely high requests per second, Universal Dashboard clocks in a bit lower. The reason for this is that each request must allocate a runspace from a runspace pool, set up the runspace for execution, parse and execute a PowerShell Script Block and then serialize any results to JSON.

You can see below that the web server was capable of about 1000 requests per second on a machine with 8 CPU cores. Your results may vary but this should suffice for most low-to-medium traffic internal tools.

PS C:\Users\adamr> Measure-Command { 1..8 | % { Start-ThreadJob { 1..1000 | % { Invoke-WebRequest http://localhost:10004/test } } } | Wait-Job }

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 8
Milliseconds      : 432

Performance Tips

Cache Whenever Possible

Most of the time, the performance issues people face with Universal Dashboard have nothing to do with Universal Dashboard itself. Running PowerShell scripts can be slow, especially when accessing remote resources, and users expect quick responses from websites. To help with this, use Scheduled Endpoints and the Cache scope to avoid loading resources every time a page loads.

For example, assume that I’m calling a remote REST API to load some resources in an endpoint that then displays them in a grid.

New-UDGrid -Title 'Movies' -Endpoint {
    # $MovieApiUri stands in for the REST API address
    Invoke-RestMethod -Uri $MovieApiUri | Out-UDGridData
}

Due to the nature of how Endpoint script blocks work, the above would call the REST API every time the page is loaded. If you have many users accessing your dashboard, this means that you will have many calls to the remote API.

In order to improve the performance of this type of component, you can instead load the movie data on an interval and show users cached data. Instead of each user reading directly from the API, they are now reading from the Universal Dashboard Cache scope’s memory.

$Schedule = New-UDEndpointSchedule -Every 10 -Minute
$Endpoint = New-UDEndpoint -Schedule $Schedule -Endpoint {
    # $MovieApiUri stands in for the REST API address
    $Cache:Movies = Invoke-RestMethod -Uri $MovieApiUri
}

New-UDGrid -Title 'Movies' -Endpoint {
    $Cache:Movies | Out-UDGridData
}

You can cache any amount of data you’d like up until you run out of memory on your machine. Be careful with caching large database tables in memory.
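One way to keep the cache footprint bounded is to store only the rows and columns the page actually displays; a sketch (Get-MovieData and the property names are hypothetical):

```powershell
# Cache a trimmed projection instead of the full table.
$Cache:Movies = Get-MovieData |
    Select-Object -First 1000 -Property Title, Year, Rating
```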

Favor Content over Endpoint

Content and Endpoint script blocks can be confusing. The main difference between a Content and an Endpoint script block is that Content is executed at the time the cmdlet is run and an Endpoint script block is executed when a component is loaded on the page.

For example, if we have a New-UDElement with the content below, the script block itself is actually executed when the New-UDElement is called.

New-UDElement -Tag 'div' -Content {
    New-UDElement -Tag 'div' -Content {'I run right away!'}
}

Alternatively, if you use an Endpoint, something different happens. The Endpoint script block is not run when the cmdlet is executed. Instead, it is cached inside the Universal Dashboard Endpoint Service for execution at a later time. Typically, this later time is when the component is loaded on the page.

New-UDElement -Tag 'div' -Endpoint {
    New-UDElement -Tag 'div' -Content {'I run when the page is loaded!'}
}

There is a visible performance difference between these two methods. The first returns all the data during the initial HTTP request to the server. The second requires an additional HTTP request back to the server to execute the endpoint script block and return the resulting data.

This can be especially tricky when nesting many Endpoint script blocks together. The below example requires 5 HTTP requests to be made.

New-UDElement -Tag 'div' -Endpoint {
    New-UDElement -Tag 'div' -Endpoint {'I run when the page is loaded!'}
    New-UDElement -Tag 'div' -Endpoint {'I run when the page is loaded!'}
    New-UDElement -Tag 'div' -Endpoint {'I run when the page is loaded!'}
}

The benefit of the Endpoint script block is that it allows dynamic data and controls to be generated when the page is loaded. This is a huge feature of UD. It's recommended to wrap the outermost component, where it makes sense, in an Endpoint and generate the inner components with Content script blocks.

The below example creates an outermost div using an Endpoint script block. This means it will require an HTTP request to load the content of the component. The inner components use Content, so they will not require another HTTP request back to the server. They are still dynamic because they are nested within a dynamic Endpoint script block.

New-UDElement -Tag 'div' -Endpoint {
    $DateTime = Get-Date
    New-UDElement -Tag 'div' -Content {$DateTime.Hour}
    New-UDElement -Tag 'div' -Content {$DateTime.Minute}
    New-UDElement -Tag 'div' -Content {$DateTime.Second}
}

Avoid Overuse of New-UDElement

New-UDElement is a very versatile component that allows you to create any HTML element, hook up event handlers, and set attributes. The downside of New-UDElement is that it requires a lot of information to be sent from the server to the web browser. With each New-UDElement call, the tag, attributes, and any event handlers need to be communicated to the client machine.

Using purpose-built controls, such as New-UDChart, requires only the data to be sent to the client rather than all of the HTML information.
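For example, with a purpose-built chart only the labels and values cross the wire; the markup lives in the pre-built client-side component. A sketch (the exact chart parameters are assumed from the UD chart cmdlets):

```powershell
New-UDChart -Title 'CPU by Process' -Type Bar -Endpoint {
    # Only this projected data is serialized to the browser;
    # the chart's HTML is rendered by the React component on the client.
    Get-Process |
        Sort-Object CPU -Descending |
        Select-Object -First 10 |
        Out-UDChartData -LabelProperty ProcessName -DataProperty CPU
}
```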

The previous version of Universal Dashboard used New-UDElement heavily for many of the standard components. Rather than defining JavaScript components and sending data via purpose-built cmdlets, cmdlets such as New-UDButton, New-UDFab, and New-UDCollapsible defined their entire structure and data using New-UDElement. This resulted in a very large JSON payload being sent from the UD server to the browser.

For example, in Universal Dashboard 2.2.0, creating a basic UDCollapsible with a single collapsible item was 1203 characters.

PS C:\Users\adamr> (New-UDCollapsible -Items { New-UDCollapsibleItem -Title "Test" -Content { } } | ConvertTo-Json -Compress).Length

In Universal Dashboard 2.4.1, the same command returns a JSON payload of 397 characters.

PS C:\Users\adamr> (New-UDCollapsible -Items { New-UDCollapsibleItem -Title "Test" -Content { } } | ConvertTo-Json -Compress).Length

Due to this reason, many of the most common controls have now been built into React components. If you want to build your own React components for Universal Dashboard, check out this repository.


PowerShell Universal Dashboard – 2.4.1

Bug Fixes

Bug in UD 2.4: running in IIS locally fails w/ Enterprise w/o license – Reported by bielawb

Error while loading UD 2.4 on raspbian – Reported by DanielSSilva

UDTable size and position are not saved in a UDGridLayout – Reported by adamdriscoll

Cannot type in any textbox that is in the UDGridLayout – Reported by adamdriscoll

New-UDMonitor does not update on 2.4 – Reported by rickyxsosa

New-UDButton -Icon wont show selected icon on the button after starting dashboard – Reported by wsl2001

Warning Message in browser after 2.4.0 deployed – Reported by wsl2001

2.4.0: breaking changes or issue in buttons when passing variables? – Reported by PorreKaj

2.4.0 Grid sort is always descending – Reported by PorreKaj

2.4.0 – Hamburger menu icons vary in size – Reported by PorreKaj

Components are always white in 2.4.0 – Reported by PorreKaj