Supplementary Q&A from our August 2019 Webinar

EM Info Admin

Published August 13, 2019
  • Here are a few questions that we didn’t get to on our August 1st webinar, along with answers provided by our presenters after the fact. Please feel free to reach out to them individually if you have additional questions or suggestions.

    Presenters were Brett Alger (NOAA), Farron Wallace (NOAA), and Mark Hager (GMRI).

    Q to Farron: There has been significant development of AI to identify fish through the NOAA Fisheries Automated Image Analysis Strategic Initiative. Have you investigated any of those products, which are currently available on GitHub? If so, what were your experiences and reasons for developing a parallel track?

    A: Our efforts are working in concert with the AIASI, including VIAME. Much of the AIASI development work is for underwater imagery that may not necessarily work for fishery monitoring. In the Pacific Islands, one EM researcher has had much success retraining detectors for species identification through VIAME. GitHub remains an inexpensive means of distributing and developing code among researchers.

    Q to Farron: Can you outline the goals for the stereo camera project as it relates to larger implementation of this technology? I.e., if the technology is proven to be viable, how will it find its way into existing or new EM programs in the US? Is the expectation that private EM companies will incorporate this into their technology?

    A: The primary goal of our stereo camera innovation work is to enable remote monitoring systems to gather accurate length measurements and volume estimates. These data are essential for stock assessments and catch estimation. In the near future, we will freely provide our machine vision system design, along with the software that acquires the imagery and sensor data. Implementation of automation will have to be on a case-by-case basis, since each fishery will likely differ in operations and species. It will be up to individuals/companies to integrate automation into their own systems, as this will be somewhat specific to each system design, including bandwidth. Our machine learning algorithms can be retrained for specific fisheries and circumstances.
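    For readers unfamiliar with how a stereo pair yields lengths, the geometry can be sketched in a few lines. This is purely illustrative, not the NOAA system: it assumes a rectified stereo pair, a known focal length (in pixels) and baseline (in meters), and that a fish's snout and tail keypoints have already been located in both images.

```python
# Illustrative stereo-triangulation sketch (NOT the NOAA implementation).
# Assumes rectified images; pixel x/y coordinates are measured relative
# to the principal point, and the same point is matched in both views.

def triangulate(x_left, x_right, y, focal_px, baseline_m):
    """Return (X, Y, Z) in meters for a point seen in both rectified images."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    z = focal_px * baseline_m / disparity   # depth from disparity
    x = x_left * z / focal_px               # back-project to metric X
    y_m = y * z / focal_px                  # back-project to metric Y
    return (x, y_m, z)

def length_between(p_a, p_b):
    """Euclidean distance between two 3-D points, in meters."""
    return sum((a - b) ** 2 for a, b in zip(p_a, p_b)) ** 0.5

# Hypothetical snout/tail keypoints for one fish about 1 m from the cameras.
snout = triangulate(x_left=300.0, x_right=-120.0, y=0.0, focal_px=1400.0, baseline_m=0.3)
tail = triangulate(x_left=-250.0, x_right=-670.0, y=0.0, focal_px=1400.0, baseline_m=0.3)
print(round(length_between(snout, tail), 3))  # prints 0.393 (meters)
```

    The same two-view geometry also underlies volume estimation, since every matched point gets a metric 3-D position rather than just a pixel location.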

    Q to Farron: What types of image files are required to train the algorithms? And how do you link the annotation data (e.g., species ID) to a specific still image from the video stream?

    A: Imagery must be processed into JPEG, which is the standard format for annotating training datasets. There are several free annotation tools and codecs available online that can be used to do this.
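    One common way to keep the link between an annotation and its source frame is to name each exported JPEG after the source video and frame number, then key the annotation rows on that filename. The sketch below illustrates this convention only; the filenames, column layout, and `write_annotations` helper are hypothetical, not part of any NOAA tool.

```python
# Illustrative annotation-linking convention (hypothetical, stdlib only).
import csv
import io

def frame_filename(video_id, frame_number):
    """Deterministic JPEG name for a frame, e.g. haul_042_f001234.jpg."""
    return f"{video_id}_f{frame_number:06d}.jpg"

def write_annotations(records, fileobj):
    """records: iterable of (video_id, frame_number, species, x, y, w, h).

    Writes one CSV row per annotation, keyed on the exported JPEG name,
    so each species ID / bounding box traces back to one exact frame.
    """
    writer = csv.writer(fileobj)
    writer.writerow(["image", "species", "x", "y", "w", "h"])
    for video_id, frame, species, x, y, w, h in records:
        writer.writerow([frame_filename(video_id, frame), species, x, y, w, h])

buf = io.StringIO()
write_annotations([("haul_042", 1234, "Pacific cod", 120, 80, 300, 90)], buf)
print(buf.getvalue().splitlines()[1])
# haul_042_f001234.jpg,Pacific cod,120,80,300,90
```

    Because the frame number is embedded in the filename, the annotation can always be traced back to its exact position in the original video stream.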

    Q to Farron: Are the software algorithms currently available?

    A: Not to the public or NGOs at this time.

    Q to Farron: As a post-production person, would there be value if the video could change resolution while running? Low resolution until fish, etc., are detected?

    A: The idea is that each frame without a fish gets dropped in real time, so switching resolution on the fly is not necessary. That said, resolution may be important depending on the field of view and what you are trying to ID.
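    The frame-dropping idea above amounts to a streaming filter: run a detector on each frame and keep only the hits. A minimal sketch, where `has_fish` stands in for a real detector (the callable name and the toy even/odd rule are both assumptions for illustration):

```python
# Minimal sketch of real-time empty-frame dropping (illustrative only;
# `has_fish` is a placeholder for an actual fish detector).

def filter_stream(frames, has_fish):
    """Yield only frames the detector flags; discard the rest immediately."""
    for frame in frames:
        if has_fish(frame):
            yield frame

# Toy example: pretend even-numbered "frames" contain fish.
kept = list(filter_stream(range(10), has_fish=lambda f: f % 2 == 0))
print(kept)  # [0, 2, 4, 6, 8]
```

    Because filtering happens per frame as the stream arrives, only the retained frames ever reach storage or transmission.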

    Q to Mark: Who is the best person to contact to learn more about architecture, code framework, operating platform, etc?

    A: New England EM code is all open source: https://github.com/openem-team/openem

    Q to Brett: How do the US programs compare with international programs in costs and complexity?

    A: My perspective is that the US is further along than most countries, but behind some, in terms of implementation. For example, Canada and Australia both have established programs with a lot of experience in reducing costs and streamlining their processes, whereas fisheries in the E.U. and other parts of the world sit closer to the development and planning stages. In my view, there are finite types of fishing gear (e.g., bottom trawl, longline) and species complexes (e.g., groundfish, large pelagics) on which to test and implement electronic monitoring; the complexity comes from the governance and management structure in a particular country and/or region. For the most part, the technology is no longer a major cost driver; the costs lie more in program design. With existing tools, programs can be crafted in a way that keeps costs relatively low. In the U.S., we are making some headway, but we are not satisfied and have a ways to go. I am very optimistic that in the U.S. we will implement new programs over the next 2-3 years and be able to share our lessons on how to control the costs and complexities.

    Q to Brett: The full benefits of AI with regard to reducing data transmission, video review, and storage don’t happen without wireless EM. But wireless EM/AI onboard can also directly influence the government audit level. Do you see a scenario where the government allows AI onboard to independently determine important hauls and send back only the required review/audit percentage (e.g., 20 or 50%), thus saving data transmission, storage, and review costs?

    A: I think we will get there, but I don’t see it as only a technical issue. I think the AI community needs to continue educating program managers, scientists, enforcement, and others on how the data are collected, stored, and analyzed. That education needs to come from people both within and outside of government. The idea described in your question rests on a lot of trust, and we need to build that trust.

    ###

 
