
Video GIS Deliverables

OK – new pie-in-the-sky idea. Long email alert – none of this is urgent, but it is something I’ve been thinking about for a while and I’m interested to hear your thoughts.

As you all know, we have “video” deliverables with EM, sometimes as a stand-alone inspection. There is no GIS deliverable for the video itself – the deliverable is usually a table of observations plus the standard EM GIS deliverable (I believe). I think we could create an AMAZING GIS deliverable out of this (ok, maybe not amazing, but pretty cool). Basically, I want to take the same linear referencing concept that we use for mapping pipe lists and leaks and apply it to the video. Here is what I am thinking (I don’t necessarily know HOW to do most of these tasks – just spitballing an idea).

  1. Step 1 would be to take a video file (or files) and split it up frame-by-frame into individual images with time stamps for each image (a rough sketch of this step appears after this list).

    1. Audio is not necessary (I don’t think)

    2. We probably don’t need every single video frame – it would likely need to be simplified a bit.

    3. I imagine there are already automated ways to do this

    4. We are assuming that the robot is continuously moving in the examples below – which I know is not always the case. I think that can be fixed with “process” though, so I’m not addressing that challenge here.

    5. Those time stamps would essentially be a distance. For this example, let’s assume that we have a 30-minute video for a project that covers 1 mile.

    6. The output would be something like a CSV file with 2 fields – a timestamp/distance (e.g. 29.50 = 29 minutes, 30 seconds) and a URL to the image still that correlates to that time stamp.

  2. That output table would then go through a Python script (see the second sketch after this list) that does the following:

    1. Exports our inspection path into a temporary geodatabase

    2. The inspection path is converted to a route

    3. The route is then calibrated so that the measure at the beginning of the route = 0 and the measure at the end = the last timestamp in the video (30 in this example). If there are specific features/bends/etc. tied to time stamps, those could also be used as calibration points along the route.

    4. The CSV records are then turned into point locations along the route. The result is that we now have point features with time stamps and URLs to images

    5. The images are then added to the point features as attachments (automatically based on the URLs).

    6. The geodatabase is then exported, now including not only the standard EM pipe-by-pipe GIS layer but also a video layer of points and associated images.

  3. This layer could then be loaded into ArcMap and the user could see the video stills as they explore the line.

  4. After I typed all of this up, I realized that we have odometers, but I have no idea if that odometer info is tied into the video – if that’s the case then our timestamp data would be replaced by odometer data in the example above.
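
To make step 1 concrete, here is a minimal sketch of the frame-splitting idea, assuming OpenCV (cv2) is available. The input file name, output folder and one-frame-every-two-seconds sampling interval are placeholders, not real project values.

```python
import csv
import os

import cv2

VIDEO_PATH = "inspection.mp4"   # hypothetical input video
OUTPUT_DIR = "frames"           # where the stills land
SAMPLE_EVERY_SEC = 2.0          # we don't need every single frame (item 1.2)

os.makedirs(OUTPUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO_PATH)

rows = []
next_sample = 0.0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Position of this frame in the video, in seconds.
    t_sec = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0
    if t_sec >= next_sample:
        # Store the timestamp as decimal minutes so 29.50 = 29 min 30 sec,
        # matching the convention in item 1.6.
        t_min = round(t_sec / 60.0, 2)
        img_path = os.path.join(OUTPUT_DIR, f"frame_{t_min:07.2f}.jpg")
        cv2.imwrite(img_path, frame)
        rows.append({"timestamp": t_min, "image_url": img_path})
        next_sample += SAMPLE_EVERY_SEC

cap.release()

# Item 1.6: a two-field CSV – timestamp/distance plus a URL/path to the still.
with open("frames.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["timestamp", "image_url"])
    writer.writeheader()
    writer.writerows(rows)
```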
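
And here is a rough arcpy sketch of the step 2 script. It assumes an ArcGIS license for arcpy, an inspection-path feature class that already carries a ROUTE_ID field plus FROM_M/TO_M measure fields (0 and 30 in this example), and a frames CSV with a matching ROUTE_ID column added – all of these names and paths are assumptions for illustration, not an existing workflow.

```python
import arcpy

INSPECTION_PATH = r"C:\work\source.gdb\inspection_path"  # hypothetical input
FRAMES_CSV = r"C:\work\frames.csv"                       # output of the step 1 sketch
GDB = arcpy.management.CreateFileGDB(r"C:\work", "video_deliverable.gdb").getOutput(0)

# Items 2.2/2.3: convert the path to a route whose measures run from 0 to the
# last video timestamp; TWO_FIELDS reads FROM_M/TO_M off the line itself.
routes = arcpy.lr.CreateRoutes(
    INSPECTION_PATH, "ROUTE_ID", GDB + r"\video_route",
    "TWO_FIELDS", "FROM_M", "TO_M").getOutput(0)

# Item 2.4: place each CSV record as a point event at its timestamp measure.
arcpy.lr.MakeRouteEventLayer(
    routes, "ROUTE_ID", FRAMES_CSV, "ROUTE_ID POINT timestamp", "frame_events")
points = arcpy.management.CopyFeatures(
    "frame_events", GDB + r"\video_frames").getOutput(0)

# Item 2.5: attach each still to its point, matching rows on the timestamp
# field (image_url holds the on-disk path to the image).
arcpy.management.EnableAttachments(points)
arcpy.management.AddAttachments(
    points, "timestamp", FRAMES_CSV, "timestamp", "image_url")
```

The join on the floating-point timestamp field in the attachment step is the weakest link here; a dedicated integer frame ID shared by the points and the CSV would be more robust.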

I did a bit of research today and couldn’t find an “out of the box” solution for this; however, I believe CCTV companies have their own proprietary software packages that do this for their own data (so I know it’s possible). The approach above is probably one of the least elegant solutions to this challenge, but it seems like it would be the easiest to implement. What would be awesome is if we could hit “play” on the video and have it show the appropriate location on the map. I think for that situation we’d have to work in reverse – i.e. edit the video metadata and apply lat/longs to each video frame. Even then, I have no idea if we’d be able to get ArcMap to read that info. I’m assuming that we’d have to develop some type of custom ArcGIS add-in (toolbar) to go along with the video files (and that unfortunately becomes another piece of software that we’d have to update every time Esri releases a new version of ArcMap/ArcGIS Pro). A web-only GIS deliverable might allow us to accomplish this without having to provide any type of software add-in.

The least elegant solution would likely be to pick out 1 or more frames per pipe stick and associate them with the resultant pipe list GIS deliverable. We could then add those images as attachments to each pipe stick, so when you click on a pipe, you get at least one image still from the video. I would assume that pipes with visible defects might get more images, and we could probably automate the image selection process in some way or another (see the sketch below).
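
Purely for illustration, that selection could be automated with something like the following, assuming a pipe list table with PIPE_ID/FROM_M/TO_M columns in the same measure units as the frames CSV from step 1 (all of these file and column names are hypothetical):

```python
import pandas as pd

pipes = pd.read_csv("pipes.csv")    # PIPE_ID, FROM_M, TO_M (hypothetical)
frames = pd.read_csv("frames.csv")  # timestamp, image_url (step 1 output)

# merge_asof needs both keys sorted; match each pipe's midpoint measure
# to the frame with the nearest timestamp.
frames = frames.sort_values("timestamp")
pipes["MID_M"] = (pipes["FROM_M"] + pipes["TO_M"]) / 2.0

picks = pd.merge_asof(
    pipes.sort_values("MID_M"), frames,
    left_on="MID_M", right_on="timestamp", direction="nearest")

# One representative still per pipe stick, ready for the attachments step.
picks[["PIPE_ID", "image_url"]].to_csv("pipe_stills.csv", index=False)
```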

The primary concerns on my end would be:

  1. The size of the deliverable (does this become a 100 GB deliverable?)

  2. Would this significantly increase the level of effort? The video review analyst would need to identify appurtenances, etc. with time stamps so that we could map them appropriately, and the GIS analyst would need to create the mapping data. Most of this would already be done if there was an associated EM inspection, but there would likely be at least a slight uptick in the level of effort compared to a purely EM project.

  • Eric Toffin
  • Jun 8, 2020
  • Future consideration
  • Eric Toffin commented – 8 Jun 2020, 10:18pm

    A couple of initial comments – given the ROI focus required by the COVID revenue impacts, we need to think about the business case for this initiative. What have you used to build the case for GIS deliverable improvements in the past? I recognize this will improve client-facing deliverables (intangible), but we will struggle to get resources if it is not part of a bigger push or does not have a tangible cost-benefit.

    As I mentioned earlier, there is a Video Deliverable standardization initiative on the Pipe Wall Roadmap, which the GIS deliverable could be a smaller part of. This initiative would encompass standardizing the video deliverables of PW/ROB/PD and improving how this dataset is delivered, presented and sold to clients. It may make sense for the GIS video concept to be part of this bigger initiative, as many of the same pieces need to be in place for both to work. I’d like to kick off a working group in the coming month to frame this problem and more clearly understand what pieces need to come together and the corresponding impact and CAPEX ask.

    Regarding the technical aspects – I reached out to James/Carlos about the robotics video timestamps: every frame has a timestamp, and the video file is overlaid with the corresponding distance. Distance is also collected in a log file whose timestamps match the video (see the sketch below).
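
    If the log file really does share timestamps with the video, converting frame timestamps to odometer distances could be a simple interpolation. A sketch, assuming the log is a CSV of timestamp/distance pairs sorted by time and in the same time units as the frames CSV (file and column names are hypothetical):

```python
import numpy as np
import pandas as pd

log = pd.read_csv("robot_log.csv")  # timestamp, distance (hypothetical log export)
frames = pd.read_csv("frames.csv")  # timestamp, image_url (step 1 output)

# np.interp expects the log timestamps to be ascending; it linearly
# interpolates between log samples to give every frame a distance.
frames["distance"] = np.interp(
    frames["timestamp"], log["timestamp"], log["distance"])

# These distances could replace the raw timestamps in the route-event step.
frames.to_csv("frames_with_distance.csv", index=False)
```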

    PipeDiver video might actually be more challenging given the various video feeds, and I believe there can be some drift between them because of the way white-balancing is handled.