
One of the biggest confusions about the Vault/SharePoint integrations is how filtering and paging work. It confuses me from time to time, and I wrote the integration. In this article, I’ll try to describe what’s going on behind the scenes to give you a better picture of the integration and how to configure it correctly. Ahh, the joys of middleware.
Let’s start with a SharePoint list showing some Vault data.
(screenshot: a SharePoint list displaying Vault data)
Take a minute to guess how that data got from Vault to SharePoint. Whatever you came up with in your head is probably wrong. I’m not saying you are dumb or anything; I’m saying that the workflow is counter-intuitive.
Basically there are three components in play here: SharePoint, Vault and the integration between the two.

When the user goes to view the list, SharePoint invokes the Integration, and the Integration invokes Vault. The data is then passed back up through the Integration to SharePoint. Pretty standard so far.
My example screenshot shows 10 items on the page, so you would think that 10 objects flowed through the Integration. But that’s not what happened. The fact that SharePoint is showing 10 rows per page is a view setting; view settings include page size, column layout, sorting, and so on.
Here is the important thing that you need to understand:
The integration does not know what the view settings are.
For example, the Vault Integration does not know to ask Vault for 10 objects. It also doesn’t know which columns to sort by, which page the user is viewing, or any of the other things in the view model.
So, what does the Integration use as criteria when it pulls data from the Vault? The answer is Data Source Filters. These are the only things that SharePoint communicates to the Integration. The filters themselves are defined by the integration and will be different depending on the Vault object being shown. In this example, the Integration defined four filters. The values are configurable, but you can't add or remove filters.
(screenshot: the Data Source Filters settings for the list)
Basically, the integration only gets to see the part in red. The rest of the list settings belong to SharePoint’s view model.
Let’s take a look at the Limit filter. This one is important because it controls how many objects the integration asks for. It has nothing to do with how many items SharePoint shows on the screen. If you leave this field blank, the Integration uses the default value of 100. You can set the limit higher if you want, but that may slow things down.
The other filter values are pretty straightforward. Set the State to “Released” and the integration will ask for only Released objects in the Vault. Set the Vault Folder to “$/Projects” and only files under that folder will be returned from Vault.
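To make that concrete, here is roughly what the Integration receives for this list, sketched as a PowerShell hashtable. The names and shape are mine, purely for illustration; the real filter set is whatever the Integration defines for the Vault object type being shown.

```powershell
# A rough sketch of the only information the Integration gets from SharePoint.
# These names are made up for illustration; you can edit the values but not
# add or remove filters. (The example list defines a fourth filter as well,
# omitted here.)
$dataSourceFilters = @{
    Limit       = 100          # blank falls back to the default of 100
    State       = 'Released'   # only Released objects are requested from Vault
    VaultFolder = '$/Projects' # only files under this folder come back
}

# Note what is NOT in here: page size, sort column, current page. Those live
# in SharePoint's view model, and the Integration never sees them.
```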
When the Vault objects are handed from the Integration over to SharePoint, the view settings are then applied. This includes sorting and filtering. If there are 100 Vault objects, but the view shows only 10 items per page, then the other 90 objects are discarded.
In the end, you have two levels of filtering:
- First Level - Data that the Integration requests from Vault. This is controlled by the Data Source Filters.
- Second Level - Data that SharePoint actually displays. This is controlled by the view settings (page size, sorting, and so on).
Let’s walk through the whole process, just to drive the point home (a small sketch of the same flow follows the list)...
- A SharePoint user views a Vault list.
- SharePoint invokes the Integration, passing in the Data Source Filters for that list view. (ex. limit=100)
- Integration queries objects in Vault, based on the Data Source Filter values. (ex. the Vault query is limited to 100 results)
- Vault returns the object set. (ex. 100 objects)
- Integration passes the object set back to SharePoint. (ex. 100 objects)
- SharePoint constructs the view, discarding anything not in the view. (ex. display first 10 rows sorted by name, discard other 90 objects)
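If it helps, here is that same flow boiled down to a little PowerShell sketch. The data is made up and this is not the real integration code; the point is just the two stages of filtering.

```powershell
# Stage 1: the Integration asks Vault for objects using only the Data Source
# Filters (Limit = 100 in this example). Fake data stands in for the Vault query.
$limit = 100
$fromVault = 1..500 | ForEach-Object {
    [pscustomobject]@{ Name = ("Part-{0:D4}.ipt" -f $_); State = 'Released' }
} | Select-Object -First $limit          # the Vault query is capped at 100 results

# Stage 2: SharePoint applies its own view settings to whatever it was handed.
$pageSize = 10
$page = $fromVault | Sort-Object Name | Select-Object -First $pageSize

# 100 objects crossed the Integration; only 10 of them end up on the screen.
"$($fromVault.Count) objects came back from Vault, $($page.Count) rows are displayed"
```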
My Take on Vault Data Standard
Earlier this week I listened in on a webinar on Vault Data Standard, and it got me thinking. Coming from a Vault API background, I have a different view of Data Standard than what is usually presented. So I’d like to provide my thoughts, since this is a blog and all....
APIs on top of APIs
Data Standard provides a lot of features that you can also get through the API. You can create custom commands, custom UI and make calls to the Vault server. What’s interesting is that Data Standard itself is a Vault Explorer plug-in. So the Data Standard API is basically an API on top of the Vault API.
It’s basically like the movie Inception, but with APIs instead of dreams.
Many pieces of Data Standard just pass through to the Vault API. Adding custom commands is an example of this. The settings in the DS .mnu files are exactly the same as the properties on CommandItem from the Vault API. In this aspect, knowledge of the Vault API transfers directly over to Data Standard.
Another example is the $vault object that shows up in the .ps1 files. Its purpose is to make web service calls to the Vault server. $vault is a WebServiceManager object passed in directly from the Vault SDK DLLs. If you want to do anything with $vault, you need the Vault API documentation on hand. Again, if you are already familiar with the Vault API, then this is no problem.
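For example, a typical call from a .ps1 file might look something like the snippet below. This is my own sketch, not code that ships with Data Standard, and it assumes the DocumentService methods shown here exist in your SDK release; keep the Vault API documentation handy for the exact signatures.

```powershell
# $vault is the WebServiceManager that Data Standard hands to the script.
# Sketch only: assumes these DocumentService calls exist in your SDK release.
$root  = $vault.DocumentService.GetFolderRoot()
$files = $vault.DocumentService.GetLatestFilesByFolderId($root.Id, $false)

foreach ($file in $files) {
    Write-Host ("{0} (version {1})" -f $file.Name, $file.VerNum)
}
```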
What I’m more curious about is the people without a Vault API background. How well are they able to utilize Data Standard? Does DS ease them into the world of Vault programming, or does the Vault API hit them like an impassable brick wall?
Beyond the API
The stuff that interests me most is the stuff that can’t be done through the Vault API.
First and foremost, the CAD plug-ins are awesome. Data Standard is not just a plug-in to Vault; it plugs into AutoCAD and Inventor as well. That way you can easily create an Inventor dialog that is Vault-aware, for example. Going through the traditional APIs would be a daunting task. You would need to be an expert in both APIs and would have to figure out how to hook the two together. DS solves all that stuff for you in a way that makes it look easy.
Another aspect of DS is the template features. New CAD files can be copied from a template instead of starting from a blank file. DS uses Vault functionality to centralize the storage of the templates. This is less an example of a generic API and more an example of a focused solution. Data Standard is really a dual product: it’s an API, and it’s an end-user utility.
The two aspects are not at odds with each other, but I’m not sure they blend well together either. To me it feels more like a Swiss Army knife: a bunch of seemingly unrelated stuff packaged together. Maybe that’s why it’s a hard product to describe to people.
Migration
One thing I like about compiled programming languages, such as C#, is that you get compile errors when something goes wrong. That way, if an API changes, you know right away what broke. PowerShell, however, is a scripting language, so it’s harder to find the breakages. Usually they show up at runtime, and only if specific pieces of code are run.
If you have a lot of PowerShell code in your Data Standard implementation, you may find it hard to maintain. Everything will seem fine at first after an upgrade, but things start failing as people start using the custom functions. Even if you think you have everything updated, a CAD user somewhere may hit lesser-used code that calls an obsolete Vault API function.
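Here’s a contrived example of what I mean. The function below loads without complaint; PowerShell only notices the problem when somebody actually runs it. GetLegacyRevisionInfo is a made-up name standing in for whatever call was dropped or renamed between releases.

```powershell
# This function loads cleanly after an upgrade...
function Get-LegacyRevisionLabel($file) {
    # ...but GetLegacyRevisionInfo is a hypothetical method standing in for an
    # API that no longer exists. Nothing fails until a user hits this code path.
    $info = $vault.DocumentService.GetLegacyRevisionInfo($file.MasterId)
    return $info.Label
}

# Months later, a CAD user runs the lesser-used command and gets something like:
# "Method invocation failed... does not contain a method named 'GetLegacyRevisionInfo'."
```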
Yes, Vault provides compatibility for older versions of the web service API, but in order to use them, you need to have the older SDK objects. If Data Standard is passing in the 2015 version of the WebServiceManager, then you can’t make use of the 2014 server APIs.
There are lots of ways to solve or minimize these issues when they do come up. For now, Data Standard is new, so nobody has run into migration issues... yet.