Automating a PowerBI and Fabric Assessment Tool
This post is a bit different from the others so far. It is part 1, covering the basics of this wheel-based Python package and how to build a custom execution environment to run it, which requires Python 3.11 or newer.
How I got here
Quick story time…
In my current role I am often asked to take a process and wrap some automation around it. The PowerBI admin and data teams had been relying on a somewhat poorly written PowerShell version of a PowerBI assessment tool that originally came from Microsoft.
It was functioning, but just barely. I was asked to remedy the situation so that several teams could use its output to feed a PowerBI dashboard.
The tool was provided to me by a PowerBI admin who did not have the expertise to fix a couple of broken pieces in it (FYI, I won't be covering those here). The team also asked if I could automate the process completely so it would be hands-off.
PowerBI Assessment Tool
This tool is a well-designed, modern Python tool that can be packaged as a wheel file or imported as a module in another piece of code or a notebook. If you'd like to take a look, the code is hosted on my GitHub. Be aware that there could be errors in the README, as it was generated by Copilot.
Part of the goal was to convert the tool to use an Entra service principal for all authentication. I won't go into extreme detail on this piece; permissions can be a bit tricky to get right. If you need guidance, please reach out to me via email.
To validate that the tool worked, I wrote a small script that imports the package and runs it with the credentials and secrets supplied from environment variables.
# Create virtual environment
python3.11 -m venv .venv
source .venv/bin/activate
# Install the wheel file, navigate to the directory if needed
pip install builtin/fabric_audit-1.14-py3-none-any/fabric_audit-1.14-py3-none-any.whl
# Replace these jinja variables with your own values
python3.11 -m fabric_audit -c "file://{{ output_dir }}" \
-m "{{ pbi_premium_metrics_id }}" \
-e "{{ pbi_environment }}" \
-a "{{ pbi_authentication }}" \
-sp "{{ pbi_client_id }}" \
-spt "{{ pbi_tenant }}" \
-sps "{{ pbi_client_secret }}" \
-f
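The validation script mentioned above can be sketched as a small Python wrapper. The flags mirror the command-line example; the environment-variable names and the function itself are my own illustration, not part of the tool:

```python
import os

# Hypothetical wrapper: pull credentials and secrets from environment
# variables and build the same fabric_audit command line shown above.
# The variable names (OUTPUT_DIR, PBI_*) are illustrative.
def build_audit_command(env=None):
    env = env if env is not None else os.environ
    return [
        "python3.11", "-m", "fabric_audit",
        "-c", f"file://{env['OUTPUT_DIR']}",
        "-m", env["PBI_PREMIUM_METRICS_ID"],
        "-e", env["PBI_ENVIRONMENT"],
        "-a", env["PBI_AUTHENTICATION"],
        "-sp", env["PBI_CLIENT_ID"],
        "-spt", env["PBI_TENANT"],
        "-sps", env["PBI_CLIENT_SECRET"],
        "-f",
    ]

# The resulting list can be handed to subprocess.run(..., check=True)
# to execute the audit without any shell quoting headaches.
```

Keeping the secrets in environment variables means they never land in shell history or the playbook source, which matters once this is handed to Ansible in part 2.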
Readying for Ansible
This tool was relatively easy to get running once permissions were applied correctly. However, there were still several issues to solve before it could run via Ansible Automation Platform. Here I will cover the Python version dependency in the execution environment (EE); the others will be covered in part 2 along with the playbook creation.
This tool requires a version of Python that's newer than the default inside the supported EE containers provided by Red Hat. Thanks to Python's ability to be installed side by side, I decided to simply install Python 3.11 as part of a new EE build.
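Since the tool needs Python 3.11 or newer, a small version guard (my own sketch, not part of the tool) makes the requirement explicit anywhere the package is invoked:

```python
import sys

MIN_VERSION = (3, 11)  # the tool's floor, per its packaging requirement

def meets_minimum(version=None):
    """Return True if the given (major, minor) pair, or the running
    interpreter when none is given, satisfies the 3.11 minimum."""
    version = version if version is not None else sys.version_info[:2]
    return tuple(version) >= MIN_VERSION

# A caller could fail fast with:
#   if not meets_minimum():
#       raise SystemExit("fabric_audit requires Python 3.11 or newer")
```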
I added the following to the execution-environment.yml that ansible-builder uses during Containerfile creation.
append_final:
- COPY fabric_audit-1.14-py3-none-any.whl /tmp/fabric_audit-1.14-py3-none-any.whl
- RUN python3.11 -m ensurepip --upgrade
- RUN python3.11 -m pip install /tmp/fabric_audit-1.14-py3-none-any.whl
- RUN echo "Built by Justin on `date +%Y-%m-%d_%H:%M`" > $MNT/etc/motd
Once the build was complete, I was able to test the tool, using some flags to limit how long it would take to run.
That's where I will end part one. Part two is coming soon and will cover wrapping this tool in Ansible so it can deliver the assessment data to a SharePoint site for PowerBI consumption.
Thanks for reading!