Training sessions are $25 each for USGIF members, $35 each for non-members.
Wednesday, Oct. 6
7:30 – 8:30 AM
Objectives and Outline:
The use of small drones is becoming an integral part of daily activities for both the civilian and geospatial intelligence communities. However, many within these communities lack the knowledge of what it takes to produce accurate, georeferenced, ortho-rectified maps and 3D models of the terrain. This training and education session focuses on providing participants with foundational knowledge on the following important topics: UAS Types and Capabilities, Datums and Coordinate Systems, Flight Planning and GSD, Ground Points, Introduction to Aerial Triangulation, UAS Camera Types, Camera Positioning and Orientation, Strengths and Weaknesses of UAS-based Photogrammetry, Product Accuracy and the ASPRS Standards, Data Processing: Pix4D Workflow, UAS-derived Product Accuracy Evaluation, Hybrid Approaches to UAS Use, and Mapping Product Generation from UAS.
- Recognize the strengths and weaknesses of drones for generating geospatial mapping products
- Understand the factors that impact the quality and accuracy of mapping products derived from drones
- Understand how to use mapping products from drones to augment existing mapping technologies and sensors flown on manned aircraft
- Learn how to create the best mission plan for drone flights
Qassim A. Abdullah, Ph.D., CP, PLS
VP & Senior Chief Scientist, Woolpert, Inc.
Adjunct professor with Penn State and University of Maryland (UMBC)
Dr. Qassim Abdullah is an accomplished scientist with more than 40 years of combined industrial, research and development, and academic experience in analytical photogrammetry, digital remote sensing, and civil and surveying engineering. His current responsibilities include designing and managing strategic programs to develop and implement new remote sensing technologies focused on meeting the evolving needs of geospatial users. Currently, Dr. Abdullah is a lead research scientist and a member of the Woolpert Labs team. In addition, Dr. Abdullah serves as an adjunct professor at the University of Maryland, Baltimore County and at Penn State, teaching graduate courses on UAS, photogrammetry, and remote sensing.
My young daughter will often ask me a question followed by another question, followed by another one: an endless cycle of 'whys.' This curiosity is a human instinct that can be applied to the world of intelligence analytics. Many analysts, after crunching data, post their results to a dashboard and believe their work is done. In their eyes, the end product has all the answers. I will show that a dashboard is just one stop along the journey. Asking the five 'W's, or the five 'whys,' is an important piece of tradecraft, and analysts need both the understanding and the tools to do it. Having the flexibility to dig into data, explore information, and seek out those unknown unknowns is critically important for answering what's going on.
This training session will cover two primary sections.
- Examine investigative methods analysts can use to see and understand the who, what, where, when, and why that reside in their data.
- Practice with several data examples and show how easy it is to find further insights.
I will finish with some tips and tricks for good dashboard design. Exploring data can be fun, and analysts with the curiosity of a kid can answer those endless 'whys.'
Ross Paulson is a Sales Consultant on Tableau Software's Federal sector team, where he provides technical support for a large DOD/IC customer base. He has dedicated a substantial portion of his career to the education and training of analysts, peers, and customers on geospatial analytic workflows, production strategies, and data visualization efforts. Specifically, Ross spent five years in the US Army Constant Hawk program, including three years designing and executing a training program for hundreds of deploying analysts. In his four years at NGA, Ross completed hundreds of group training sessions with analysts in the realm of ABI tradecraft. Earlier in his career, he trained 50+ deploying analysts for the Blue Devil program prior to its deployment to theater. In his current role, Ross leads a variety of topical customer training sessions, as well as internal peer training, on the use of a data visualization product.
Throughout his 20-year career, Ross has worked in and around the DOD and Intelligence community. Prior to joining Tableau, Ross worked many years at NGA supporting and evangelizing NGA’s Activity Based Technologies as a SETA in the CIOT KC. Earlier in his career, he worked with the US Army to develop and field a complex surveillance platform called Constant Hawk, where his focus was initially to build intelligence products, but quickly shifted to designing the overall analytical workflows to maximize the analysts’ ability to extract intelligence. Mr. Paulson graduated from Virginia Tech with a degree in Geography.
Learn about the recently released version 1.0 of the SpatioTemporal Asset Catalog (STAC) specification and the Cloud Optimized GeoTIFF (COG) format by implementing a simplistic analytic using a publicly available Sentinel-2 data set. Participants in this workshop will select a point of interest from around the globe, query a publicly available STAC service of Sentinel-2 imagery and produce a time-lapse video and a change detection visualization. This exercise will give participants hands-on experience with both a STAC compliant web service and content in COG format.
The STAC specification, https://stacspec.org, provides a common language to describe a range of geospatial information, so it can more easily be indexed and discovered. If you are a provider of data about the earth, STAC is driving a uniform means for indexing assets. If you are building infrastructure to host, ingest, or manage collections of spatial data, STAC’s core JSON is the bare minimum needed to describe geospatial assets, and is extensible to customize to your domain.
COGs, https://www.cogeo.org, are regular GeoTIFF files, aimed at being hosted on an HTTP file server, with an internal organization that enables more efficient workflows in the cloud. They do this by leveraging the ability of clients to issue HTTP GET range requests that ask for just the parts of a file they need.
The Sentinel-2 mission is a land monitoring constellation of two satellites that provide global coverage of the Earth's land surface every 5 days, making the data of great use in ongoing studies. https://sentinel.esa.int/web/sentinel/missions/sentinel-2
- Example Python code will be provided to implement a simplistic version of the complete process; no algorithm development experience is required.
- Participants should have basic Git and Python skills to clone and run the lab.
- Advanced participants are encouraged to participate in a competition to improve upon the simplistic change detection algorithm included in the lab.
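For a rough sense of the workflow, the sketch below shows a STAC search and a reduced-resolution COG read of the kind the lab automates. It assumes the pystac-client and rasterio packages and the public Earth Search STAC endpoint; the collection and asset names are assumptions, and this is an illustration rather than the workshop's provided lab code.

```python
# Minimal sketch (not the workshop lab code): search a public STAC API for
# Sentinel-2 scenes over a point and stream a reduced-resolution band from a COG.
# Assumes the pystac-client and rasterio packages and the public Earth Search endpoint.
from pystac_client import Client
import rasterio

STAC_URL = "https://earth-search.aws.element84.com/v1"   # assumed public STAC service
lon, lat = 36.82, -1.29                                   # example point of interest (Nairobi)

catalog = Client.open(STAC_URL)
search = catalog.search(
    collections=["sentinel-2-l2a"],                       # assumed collection id
    intersects={"type": "Point", "coordinates": [lon, lat]},
    datetime="2021-01-01/2021-06-30",
    query={"eo:cloud_cover": {"lt": 10}},
)
items = sorted(search.items(), key=lambda item: item.datetime)
print(f"{len(items)} low-cloud scenes found")

# COGs are plain GeoTIFFs organized so clients can fetch only the byte ranges they need.
# Reading a decimated overview therefore transfers a small fraction of the full file.
red_href = items[0].assets["red"].href                    # asset key is an assumption
with rasterio.open(red_href) as src:
    preview = src.read(1, out_shape=(src.height // 32, src.width // 32))
print(preview.shape, preview.dtype)
```

Repeating the windowed read over each item in the sorted list is the basis for the time-lapse and change detection exercises described above.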
Andy Spohn is a Solutions Architect at AWS with a focus on geospatial content who previously worked as a GIS cloud software developer at DigitalGlobe/Maxar.
A digital twin is a virtual representation that serves as the real-time digital counterpart of a physical object or process. Digital twins of geographical locations are relatively commonplace throughout Europe and Asia but are just emerging within the United States. Many assume that it is complicated, costly, and time-consuming to create a digital twin of a city. This 60-minute course will give both technical and non-technical attendees a greater understanding of the concepts and steps involved in the rapid creation of a robust, interactive digital twin model from existing datasets. Attendees will leave the session with concepts and ideas that they can immediately put to use as they consider how digital twins can enhance their missions.
The course will start with a brief review of what a digital twin is and how it’s used. The bulk of the course will focus on the steps needed to create an interactive digital twin model that leverages CAD and BIM properties. Along the way, the instructor will debunk misperceptions and identify the necessary foundational data sets needed. Participants will understand how to leverage the datasets they have on-hand today and discuss important aspects of importing those data sets into a virtual environment.
- Understand what a digital twin model is and how it can be used.
- Identify what datasets can be used to create a digital twin model.
- Discuss some of the important aspects to consider when creating a digital twin model.
- Review the steps needed to create an interactive digital twin model.
Needed prerequisite knowledge and skills: Basic understanding of GIS and CAD principles
Joe Travis is VP of User Success at Bentley. He is a 25-year veteran of geospatial technology engineering, education, and solution development. He has in-depth GIS experience and technical knowledge across multiple solution platforms and industries. With expertise that ranges from engineering/mapping technician to GIS operations manager through to product manager and now head of user success, Joe is adept at bringing technical GIS concepts to life for audiences of all skill and knowledge levels.
This hands-on training session will allow imagery analysts at any level of SAR expertise to learn how to use the Capella Console to rapidly order high resolution commercial SAR through the latest software technologies and via API. Attendees will advance their remote sensing and imagery analysis skills as a technical competency in the GEOINT Essential Body of Knowledge.
During this training we will explore the use cases of X-band Synthetic Aperture Radar (SAR) remote sensing at resolutions ranging from 0.5 m to 1.2 m for high-resolution applications with unclassified imagery. We will then provide step-by-step instructions on how to task the Capella small satellite radar constellation through a secure, private GUI as well as by API in near-real time. This demonstration will also provide instruction on ordering imagery across three different imaging modes and on specifying tasking tiers, window times, scene length, look direction, look angle, orbit state, azimuth resolution, ground range resolution, and the number of looks.
By the end of the session, attendees will understand the foundations of SAR to enable application-specific tasking for GEOINT, including monitoring objects over land, steep terrain, and maritime locations.
Learning Objectives and Pre-Requisite Knowledge:
- Responsively task commercial SAR satellites
- Gain familiarity with commercial SAR GUI and API systems
- Create foundational knowledge for SAR applications
- Task, collect and acquire SAR imagery over multiple locations
Prerequisites: No pre-requisite knowledge required.
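The session works directly in the Capella Console and API. Purely as an illustration of what API-based tasking looks like in general, the sketch below submits a tasking request with the Python requests library; the endpoint URL, payload fields, and authentication shown are hypothetical placeholders and are not Capella's actual API schema.

```python
# Hypothetical sketch only: the endpoint URL, payload fields, and token handling below
# are illustrative placeholders, NOT Capella's actual API schema.
import requests

API_BASE = "https://api.example-sar-provider.com"   # placeholder base URL
token = "YOUR_ACCESS_TOKEN"                          # obtained from the provider's auth flow

tasking_request = {
    "geometry": {"type": "Point", "coordinates": [51.42, 35.69]},  # area of interest
    "imaging_mode": "spotlight",        # e.g., spotlight / stripmap style modes
    "tasking_tier": "standard",
    "window_open": "2021-10-07T00:00:00Z",
    "window_close": "2021-10-09T00:00:00Z",
    "look_direction": "either",
    "orbit_state": "either",
}

resp = requests.post(
    f"{API_BASE}/tasking/requests",
    json=tasking_request,
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
print("Tasking request submitted:", resp.json().get("id"))
```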
Presenter: Milo Vejraska is Lead Sales Engineer at Capella Space. Milo is a Remote Sensing & GIS Professional with extensive experience in technical consulting and in-depth knowledge of implementing GIS and data software solutions for industry leaders in energy, environmental science, government, defense, mining, & construction. He currently employs his technical skills on behalf of Capella Space to help government and commercial customers extract value from SAR data.
This hands-on class orients attendees to the landscape of big data and production-grade geospatial workloads, showing how, in many instances, bespoke storage layers can be replaced with inexpensive cloud object storage without sacrificing performance. The training block engages modern, cloud-native strategies for Geospatial Data Management as well as several Emerging Competencies within Data Science, Use of Varied Datasets, and Automation. The material introduces a handful of geospatially oriented use cases cast through the lens of the EBK and bridges traditional, single-node workloads into distributed approaches. It offers scaled examples of ingesting various spatial formats for distributed processing, joining that data with other sources and sinks, running distributed spatial operations, and advanced analytic techniques such as clustering with GEOSCAN, a recently developed, optimized open source Spark + H3 implementation of DBSCAN.
The examples run in the Databricks Lakehouse Platform, optimized for training, but leverage OSS frameworks such as Apache Spark™, Apache Sedona™, Delta Lake, H3, Koalas, and MLflow to ensure ease of portability. In addition, concepts around automation of geospatial workloads, building multi-hop data processing pipelines, strategies to understand the veracity of data, and data governance will be briefly introduced. Finally, the material provides a brief overview of managing the machine learning lifecycle, in line with building spatial analytics, using MLflow features for experiment tracking, projects, registry, and deployment. Databricks has deep competencies across the spectrum of this proposed content. We look forward to delivering a high-quality in-class experience for attendees at GEOINT 2021.
Prerequisites: None. Material will be presented in a way to level-up attendees on identified EBK categories regardless of their starting point.
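To convey the flavor of the distributed, H3-indexed processing described above, the sketch below indexes point records to H3 cells and aggregates per cell in PySpark. It assumes an existing SparkSession and the h3 3.x Python package, and is illustrative rather than the course's actual notebooks.

```python
# Minimal sketch (assumes the pyspark and h3 3.x packages; not the course notebooks):
# index point records to H3 cells and aggregate per cell with Spark.
import h3
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("h3-aggregation-sketch").getOrCreate()

# Toy point data standing in for a large AIS / GPS / sensor feed.
points = spark.createDataFrame(
    [(38.889, -77.035, "a"), (38.890, -77.036, "b"), (40.713, -74.006, "c")],
    ["lat", "lon", "ship_id"],
)

RESOLUTION = 7  # H3 resolution (hexagons of roughly a few km^2)

@F.udf(returnType=StringType())
def to_h3(lat, lon):
    # h3 3.x API; in h3 4.x this call is named latlng_to_cell
    return h3.geo_to_h3(lat, lon, RESOLUTION)

counts = (
    points.withColumn("h3_cell", to_h3("lat", "lon"))
          .groupBy("h3_cell")
          .agg(F.countDistinct("ship_id").alias("unique_ships"))
)
counts.show()
```

Joining two data sources on the shared h3_cell key is the same pattern used for distributed spatial joins at scale.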
Michael has been with Databricks since early 2017 and currently manages National Security Solution Architects. His job is to lead customer adoption of customized solutions which process and analyze data at cloud scale using the Databricks Lakehouse Platform, a powerful yet simple platform to unify all data, analytics, and AI workloads. He earned an EE B.S. from West Point in 1999 and recently completed a Software Engineering MA from Harvard. Before joining Databricks, Michael performed engineering functions on an identity management product and a commercial semantic network analysis product, and researched graph analysis techniques for motion imagery processing systems. He is also one of the founding members of the Geospatial SME group at Databricks. Within that role he has worked with a variety of popular spatial frameworks and their corporate sponsors. He is the author of a handful of blogs, notably Processing Geospatial Data at Scale with Databricks, and has spoken at numerous Databricks-sponsored workshops, webinars, and events. More on his background at LinkedIn.
Contact: Michael Johns (email@example.com | 571-334-8809)
2:00 – 3:00 PM
GEOINT missions are unique, made so by key intelligence questions, order of battle, regional climate and vegetation, and even by imaging opportunities and operations. These variables, and more, make generalizing CV for intelligence purposes a challenge. A CV model trained on tanks in China's south won't perform well detecting and classifying Syrian tanks in the desert. This makes a centralized approach to model development and sustainment problematic. Worse still, budgetary realities act as a forcing function that prioritizes which accounts, missions, and analysts receive CV models and which do not. Finally, any successful approach to CV model creation is limited by the supply of competent AI/ML engineers, which is still far below the demand. Thus, the DoD/IC must follow a distributed approach, where code-free tools enable individual analysts without AI/ML skills to build and maintain the CV models they need. This approach would 1) improve CV model performance against unique missions, 2) reduce development times to build and maintain models, as well as long-term reliance on contractor support for those models, thus reducing costs, and 3) empower analysts today, while also ensuring account parity, by bypassing the need for extensive AI/ML workforce training.
Attendees will gain an understanding of the necessary components of the end-to-end pipeline for the computer vision (CV) model creation process, as well as learn how to use CrowdAI’s code-free AI enablement tools to create and sustain their own CV models for GEOINT.
Prerequisites: NO prerequisites (no data science, coding, or AI/ML skills required).
Instructor: Devaki Raj, CEO and Co-Founder, CrowdAI
Devaki Raj is CEO and Co-founder of CrowdAI, which offers the leading software infrastructure to build customized vision AI. She completed her undergraduate degree and Master's in Statistics at the University of Oxford, and subsequently used her expertise at Google (Maps, Energy, and [X] teams) to optimize renewable energy allocation in sub-Saharan Africa. She co-founded CrowdAI in 2016 and has since been named to both Inc. and Forbes magazine's 30 Under 30 lists. CrowdAI has been named by TechCrunch as one of Y Combinator's top 8 companies to watch, by Forbes as a Top 25 Machine Learning Company, and was an NVIDIA Deep Learning Inception Awards Finalist.
Weather and climate mapping are applied far beyond evening news broadcasts. Businesses, government agencies, and individuals rely on weather data and information products to drive economic growth, protect lives and property, and support decision-making. Although climate change has become a major political topic, most people have never explored climate data or models of possible future climates. The data is often stored in scientific file formats that require specialized software and can seem unintelligible to those unfamiliar with climate terms and concepts. Learn how to leverage and understand climate data at both local and global levels, as well as how climate might change in the future. Gain understanding of major climate concepts and familiarity with real climate data.
- Discover climate data sources and unique ways to visualize the data
- Understand common climate terms, concepts, and data types
- Explore resources and tools available for applying GIS to climate
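As a small, hedged illustration of the scientific file formats mentioned above, the sketch below opens a NetCDF climate file and extracts a local time series. It assumes the xarray package and a hypothetical local ERA5-style temperature file; variable and coordinate names vary by dataset and are not part of the session materials.

```python
# Minimal sketch: open a NetCDF climate file and pull a local time series.
# Assumes the xarray package and a file with 2 m air temperature on a lat/lon grid;
# the file name, variable name, and coordinate names are assumptions.
import xarray as xr

ds = xr.open_dataset("era5_t2m_sample.nc")        # hypothetical local file
t2m = ds["t2m"]                                    # variable name is an assumption

# Nearest-grid-cell time series for a point of interest, converted from Kelvin to Celsius.
point_series = t2m.sel(latitude=38.9, longitude=-77.0, method="nearest") - 273.15
print(point_series.mean().item(), "degC mean over the record")

# Monthly climatology grid, a common first look at a climate dataset's structure.
climatology = t2m.groupby("time.month").mean("time")
print(climatology.sizes)
```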
Matt Woodlief is a Solution Engineer on Esri’s National Government team. He has a decade of experience working in GEOINT at all phases of the Intelligence Cycle. He enjoys discussing ways to solve spatial problems and helping people get the most out of their ArcGIS deployments. When not at his computer, he enjoys running and hanging out with his wife, son, dog, and 2 cats.
In the past decade, technical and regulatory developments in the space industry have led to increasingly enhanced global communications and innovations in Earth observation, namely imagery. Yet, within the signals domain, satellite-based radio frequency (RF) sensing has been one of the last frontiers to be commercialized. RF data are drastically different from traditional satellite data (EO/SAR) but work in tandem with them to provide timely and actionable insights. This session will provide a fundamental overview and delve into the techniques required to extract the full value of this nascent phenomenology.
In this course, we’ll cover the four GEOINT Essential Body of Knowledge Competency areas by introducing the technical approach for RF geolocation and delivering hands-on training of how to visualize and exploit the data sets within GIS and RF analysis tools.
Attendees will leave this training with a fundamental understanding of:
- What RF geolocation is and how it's commonly done
- Examples of RF-emitting devices within the RF spectrum
- Common applications for RF monitoring
- How to interpret RF data and analytics to derive intelligence using GIS & analysis tools
- How RF can augment other sources of intelligence
James Doggett, Director of Operations at HawkEye 360. James is responsible for overseeing systems engineering and project management support across the entire company, and also leads all government programs. Prior to joining HE360 James provided modeling, simulation, and analysis support to the Principal DoD Space Advisor Staff (PDSAS). Previously, he was an Aerospace Engineer at the US Naval Research Laboratory where he worked on a wide range of space-based PNT projects. James holds a Master’s Degree in Systems Engineering from the University of Virginia and a Bachelor’s Degree in Aerospace Engineering from the University of Maryland.
Stu Cox, Product Manager at HawkEye 360. With more than 15 years' experience leading product definition and development, Stu plays an integral part in driving product development and innovation for HawkEye 360's RF data and analytics products to support myriad U.S. government, international government, and commercial customer needs. Prior to HawkEye 360, he was most recently a Senior Principal Product Manager at Intelsat, where he managed land mobility solutions serving the global market.
Remember when everyone had a different definition of what a cloud was? How about Edge Computing? Want to get a better understanding of what edge computing is? Want to figure out if it's something that can help your organization? Let's talk about that. Join Red Hat for an informative session around Computing at the Edge!
We'll discuss what edge computing is (it's simpler than you think), the workloads that can benefit from it, considerations for implementation and where to go from there. Whether you're a developer, tech lead or business process driver, you'll learn something useful in this session. We'll cover the general technology along with use cases relevant for GEOINT attendees.
Mark Shoger is a Solution Architect Manager at Red Hat. Mark's Solution Architecture teams deliver solutions to Red Hat's government customers, helping integrate Red Hat products and services into customers' environments to solve their unique and challenging problems.
Mark is first and foremost a Linux engineer. He has been a Windows sysadmin, Apple engineer, VMware administrator, storage administrator and architect. Now as a Red Hatter, Mark has completely shifted to an Open Source, Linux frame of mind and practice. Open Source is a way of life, not a buzz phrase.
Scenario-based training is critical for the continued advancement of geospatial analytic tradecraft. The growing volume, variety, and complexity of GEOINT data require analysts to master complex tools and skills to perform efficiently and effectively. This mastery can take months if learned on the job. Immersive, scenario-based training embedded in realistic simulated data sets speeds up this process. In this type of training, analysts learn by doing, applying new skills in a natural workflow as they solve realistic intelligence problems. Whitespace is wrapping up a project for NGA to create immersive training scenarios within simulated non-pixel (not imagery or video) data sets, each consisting of a dozen types of simulated, co-collected GEOINT data. A scenario-based analytic challenge is embedded in each dataset. The dataset and scenario are tailored to the course objectives, compelling students to apply the lessons learned in class in a realistic, hands-on environment.
The learning objectives for the training session will be to understand the steps involved in the process of planning, scripting, creating, and validating a training scenario of simulated entities and activities embedded in simulated data. While the session will touch on specifics of how we used our Worldline simulation engine in our work for NGA, we will focus on the broader concepts of creating an effective training scenario within any simulated data set. Attendees should be familiar with creating or using simulated data (as an individual data type or a data set) and with creating or participating in training scenarios (either live or virtual).
Patrick Kenney is Manager, Data Simulation & Analytics at Whitespace. He has been creating and instructing analytic skills training for seven years. Many of these courses are centered on scenario-based training where students are taught new tools and methods, and then placed in a notional analytic mission within a simulated data set where they must apply their training to solve realistic problems. Patrick enlisted in the U.S. Marine Corps in 2000, serving as a Combat Engineer and Civil Affairs Specialist. Following his military service, Patrick earned his B.A. in International Affairs (and Certificate in GIS) from the University of Mary Washington, then his M.A. in Security Studies from Georgetown University. Patrick has worked as a collection manager and ABI analyst, gaining hands-on experience in the skills and mindset important to effective intelligence operations. Patrick embodies the bridging of analysis and data science, having taught himself enough coding and data science to be dangerous, but effective, when confronting modern massive, diverse data sets.
Given the ever-increasing number of commercially available spectral sensors and geospatial data sources in general, Computer Vision (CV) and Machine Learning (ML) approaches are increasingly in demand within the GEOINT community. However, the heuristics and rules that apply to small-scale applications of these approaches do not necessarily hold when deploying a model at planetary scale. This class will demonstrate the techniques required to train an ML-based CV model and deploy it at scale across a large region. The results will be analyzed, and considerations for global-scale analysis will be discussed. Students will leave this class with practical skills in training a CV model and deploying it at scale, along with an initial understanding of some of the design trades required when working across large geographic areas. The class will demonstrate ease of interoperability to encourage young professionals, showcase advanced capabilities for senior scientists, and focus on areas of interest to the GEOINT community for feature and object identification.
This session/class is designed to provide critical skills to each attendee. It is expected that each student will have differing levels of skill for each of the key elements of the class; however, the diversity of the demonstration should provide everyone with some new knowledge, including:
- An understanding of the current and future commercial sensor environment
- Hands-on guided practice in building, training and deploying CV models
- An understanding of the engineering and analysis plans required to perform analysis at global scale
- An understanding of the analysis trade-offs in building models for global-scale analysis
- Calibrated expectations of what is possible with modern tools, and what remains difficult
- Exploration of basic tools and their impact, as recently incorporated into a geo-computation course curriculum
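As a generic sketch of what deploying a trained CV model across a large raster by tiling can look like, the code below iterates windows of a large image and runs inference per tile. It assumes the rasterio and numpy packages and an already-trained `model` object with a `predict` method; it is illustrative and not the platform API used in the class.

```python
# Generic sketch of windowed, tile-by-tile inference over a large image.
# Assumes rasterio/numpy and an already-trained `model` with a predict(array) method;
# this is NOT the platform API used in the class.
import numpy as np
import rasterio
from rasterio.windows import Window

TILE = 512  # tile edge length in pixels

def run_inference(raster_path, model):
    """Yield (window, prediction) for each tile of the raster."""
    with rasterio.open(raster_path) as src:
        for row in range(0, src.height, TILE):
            for col in range(0, src.width, TILE):
                window = Window(col, row,
                                min(TILE, src.width - col),
                                min(TILE, src.height - row))
                chip = src.read(window=window)       # shape: (bands, h, w)
                chip = np.moveaxis(chip, 0, -1)      # to (h, w, bands) for the model
                yield window, model.predict(chip[np.newaxis, ...])

# At planetary scale the same loop is sharded across many workers, each handling a
# subset of tiles or scenes, which is where the design trade-offs discussed above arise.
```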
John Shriver – Team Lead – Platform, Descartes Labs, Inc. Experienced Data Scientist with a demonstrated history working in a variety of different domains. Skilled in experimental design, spatial statistics, GIS, and the use of machine learning algorithms to solve business problems. Strong engineering professional with a Master of Science (M.S.) focused in Environmental Informatics from University of Michigan.
Mark Speck - Huntington Ingalls Industries and Assistant Professor in Data Science at Chaminade University of Honolulu. Experienced scientist practiced at obtaining and analyzing multimodal forms of data, with skills ranging from the biological sciences to data science across finance, demographics, environmental, and socioeconomic domains. Currently focused on developing the Data Science program at Chaminade University to include GEOINT certification and minor opportunities. His instruction methodology focuses on fundamental concepts in data science applied to real-world problems, helping students create reproducible and ethical research. PhD in Microbiology from the University of Hawaii at Manoa.
Dr. Shawana P. Johnson, GISP – Global Commercial Geospatial Subject Matter Expert, Government Advisor, Descartes Labs, Inc. Globally known commercial geospatial author and speaker focused on bridging commercial geospatial technology to the government and providing advanced technology transfer thought leadership as we advance in digital transformation to 5G and IoT. Experienced data scientist in global food security. Doctorate in Management from Case Western Reserve University and a 2019 certificate from the Harvard University Kennedy School of Government (Senior Manager in Government).
Thursday, Oct. 7
7:30 – 8:30 AM
Begin simplifying your workflow and increasing interoperability with Planet data. This hands-on workshop will walk attendees through a basic understanding of Planet's capabilities and how organizations can increase their agility and enhance lightweight insight generation. Attendees will learn how to easily generate, evaluate, and engage with hosted data in Planet's interface. The course will then dive into how analysts can optimize interoperability, storing data once and syncing it across Planet web apps, multiple GIS environments, and supported partner platforms like Google Earth Engine (GEE). This well-rounded workshop will cover a fundamental understanding of Planet data and then walk through the generation of, and engagement with, information by an imagery analyst using real-life use cases. The course will be dynamic and engaging, turning attendees into active participants in the learning process. Upon completion, each attendee will walk away with not only a fundamental understanding of Planet data but also the ability to work with it in partner platforms.
In this session, attendees will learn:
- How to order, manage, and assess data all in one system
- How users can utilize tools like color enhancement and spectral analysis quickly within Planet Explorer
- How to sync data across platforms like GEE
- Interacting with Planet data in GEE
Prerequisites: None Needed
Ricky Rios is the Director of Customer Success at Planet Federal where he has been working for over 5 years. Prior to Planet, Ricky worked at Harris Geospatial Solutions on a multi-modal content management and dissemination system for which he deployed to Iraq and Afghanistan back in 2011 and 2012 respectively. He received his BS and MS in electrical engineering specializing in Applied Electromagnetics from the University of Puerto Rico at Mayaguez. He also serves as an Intel Officer in the United States Navy Reserves.
Monica Weber is a Senior Manager of Customer Success at Planet Federal. Prior to Planet, Monica worked at Dataminr, and she has an extensive background in Open Source Information/Intelligence (OSINT). She holds a BA in Economics from the College of the Holy Cross and a Certificate in Cartography, GIS, and Remote Sensing from Michigan State University. She also serves as an Officer in the United States Navy Reserve.
Analysis-Ready Data (ARD) is essential to ensure consistency of AI/ML algorithms. With increased demand for imagery and derived insights for foundational GEOINT as well as tactical ISR, it's necessary to ensure data from multiple sources is curated for consistency to perform automatic target recognition and broad area surveillance. Data cubes comprise spatially aligned and spectrally calibrated image scenes. Data cubes created from medium-resolution Landsat and Sentinel-2 satellite imagery are examples of ARD. Similar techniques can be extended to high-resolution imagery to create consistent data for various applications, including multi-temporal image analysis for battle damage assessment, change detection, etc. Analysts spend up to 80% of their time on data curation; ARD streamlines this process to help analysts focus on intelligence gathering.
Spatial accuracy is a key tenet of ARD for baseline imagery and precise registration of multi-temporal imagery. In this training, analysts will learn about the various spatial errors associated with imagery. Image-to-image registration ensures temporal precision between images. This training will highlight the strengths and limitations of current techniques. Spectral consistency of images over geography and time is another key tenet of ARD. Atmospheric compensation is a key step to ensure image consistency over varying atmospheric conditions and sensor viewing geometries. Analysts will learn the fundamentals of atmospheric correction, including surface reflectance and top-of-atmosphere reflectance.
Learning objectives include various aspects of data preparation for AI/ML algorithms and an understanding, through practical examples, of typical issues faced. Basic knowledge of remote sensing, satellites, and imagery is a prerequisite.
This training will demonstrate the benefits of ARD and resulting accuracy impacts on feature extraction using practical examples. The session will also highlight the importance of ARD for the portability of AI/ML models.
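As a small worked example of the top-of-atmosphere reflectance concept mentioned above, the sketch below applies the standard radiance-to-reflectance formula. The numeric coefficient values are placeholders of the kind a sensor's metadata would supply; this is background illustration, not part of the course materials.

```python
# Converting at-sensor radiance to top-of-atmosphere (TOA) reflectance:
#   rho = (pi * L * d^2) / (ESUN * cos(theta_s))
# where L is spectral radiance, d is the Earth-Sun distance in AU, ESUN is the band's
# mean exoatmospheric solar irradiance, and theta_s is the solar zenith angle.
# The numeric values below are placeholders; real values come from image metadata.
import math

L = 95.0          # at-sensor radiance, W / (m^2 * sr * um)
d = 1.0140        # Earth-Sun distance, AU
ESUN = 1574.0     # band solar irradiance, W / (m^2 * um)
sun_elev = 52.0   # solar elevation angle, degrees
theta_s = math.radians(90.0 - sun_elev)

rho_toa = (math.pi * L * d**2) / (ESUN * math.cos(theta_s))
print(f"TOA reflectance: {rho_toa:.3f}")   # unitless, typically between 0 and 1
```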
Dr. Navulur is Sr. Director of Strategic Growth at Maxar. He received his Ph.D. from Purdue University and was recognized as a 2019 Distinguished Purdue Alumni. He sits on the board of directors of the Open Geospatial Consortium. He also represents private industry at the United Nations Group of Experts in Global Geospatial Information Management. Kumar has more than 25 years of experience in the geospatial industry.
Artificial intelligence (AI) and machine learning (ML) continue to revolutionize analysis and decision-making for warfighters; however, the need for massive amounts of labeled data hinders ML’s usefulness due to the expense of labeling and the difficulty of finding instances of rare objects. In many operational domains, targets of interest often have few examples for training deep learning algorithms, leading to unacceptably poor performance by automated detectors. The ML and computer vision research communities have been addressing these challenges through new methods for low-shot learning, domain transfer, domain adaptation, training data augmentation, and data generation. Much of this research has been funded by federal R&D programs such as DARPA Learning with Less Labels, AFRL VIGILANT, and NGA SBIRs, with application to satellite imagery, wide-area motion imagery, and full-motion video. This tutorial will describe both unsupervised and supervised methods that attack the problem from multiple perspectives, including leveraging labels in one modality to improve detection in another modality; active learning to find rare objects or activities in large amounts of data; low-shot image and video object detection/tracking and classification via attributes; and discovering salient anomalies. These advances are now available in software tools, enabling analysts to leverage them to create accurate detectors for rare objects with only a handful of exemplars, without programming or AI/ML expertise. Current open-source platforms will be described, with a demonstration of the Video and Imagery Analytics for the Marine Environment (viametoolkit.org) toolkit, developed by Kitware on NOAA and DoD funding.
In this course, attendees will learn about the challenges faced by machine learning approaches for analytic problems when small amounts of training data are available. Solutions for these problems will be discussed in detail, covering the algorithmic underpinnings at a high level, and - more importantly - the currently available tools analysts can use now for a variety of domains.
No prior knowledge of machine learning, AI, or programming is required. Attendees should have some experience with or knowledge of detecting objects or activities in imagery, video, or other data, from the end-user, operator, or analyst perspective. The course will be oriented towards analysts with expertise in satellite imagery, FMV, WAMI, radar imagery, and other ISR modalities.
Instructor: Dr. Hoogs leads Kitware’s Computer Vision Team, which he started after joining Kitware in 2007 and now has over 70 members with more than 25 PhDs. For more than two decades, he has supervised and performed research in various areas of computer vision and machine learning including: object detection and recognition; scene understanding; image and video segmentation; multimedia and content-based retrieval; event, activity and behavior recognition; object tracking. He has led projects sponsored by commercial companies and government entities including DARPA, AFRL, ONR, IARPA and NGA, ranging from basic research to developing advanced prototypes and demonstrations installed at operational facilities. He has initiated and led more than two dozen government contracts in video and motion analysis, involving more than 15 universities and multiple government institutions. He has worked with premier researchers in low-shot learning, domain transfer and deep learning at top universities including UC Berkeley, the University of Maryland, Columbia University, and Boston University. At GE Global Research (1998-2007), Dr. Hoogs led a team of researchers in video and imagery analysis on projects sponsored by DARPA, Lockheed Martin and NBC Universal. He has published more than 80 papers in computer vision and machine learning, and frequently organizes conferences and workshops in these fields. He has served on expert panels for DARPA, NOAA, NSF and the National Academies.
Imagery interpretation is the act of visually examining an image to identify, characterize, and assess the significance of objects, buildings, vehicles, natural features, etc. Attendees will gain practical knowledge of image interpretation which they can apply to all missions, problems, and use cases. Experienced analysts can use this course as a refresher while non-analyst attendees may want to broaden their skillset for a more universal GEOINT approach to their career. Machine learning and computer vision engineers can use this course to gain background and context, enabling better development of geospatial training datasets.
Andrea Simmons is a full-spectrum imagery methodologist with over 15 years' experience in imagery analysis and geospatial analysis, including instructing IMINT/GEOINT, gained while serving as Active Duty Air Force and contracting with the DOD. Her experience covers full-spectrum/multi-modal imagery data from space-borne, airborne, commercial, and national sensors. She is forward-leaning, providing subject matter expertise to apply existing and emerging technologies and capabilities to solve customer intelligence problems. Additionally, she is passionate about education and training, especially in the GEOINT field. Andrea holds two USGIF certifications: CGP-R and CGP-G.
Geospatial data is more than just dots on a map. It comes in a variety of formats from a variety of sources with detailed metadata. Utilizing different visualization and analytical tools can expose more information about geospatial data than just pure location. Charts and graphs such as timelines, treemaps, chord diagrams, link charts, data clocks, and heat charts complement geospatial data and can provide more context for your data. Learn how to gain insight into geospatial data with both ArcGIS Insights and ArcGIS Dashboards to create meaningful information products derived from geospatial data.
- Connect to various data sources and prepare data for analysis
- Visualize, interact with, and analyze data in various types of charts, graphs, and viewers
- Share analysis results and workflow models
Prerequisites: A username for ArcGIS Online (a free 60-day ArcGIS Online trial is available) and a license for ArcGIS Insights (a 21-day ArcGIS Insights trial license is available)
Matt Berra is a Senior Solution Engineer on the Intel team. He has spent over 17 years supporting geospatial programs within the Intelligence Community and the Department of Defense. His focus areas include geospatial development and programming, spatial analysis, and integration with federal systems and datasets.
The Microsoft Planetary Computer is a platform that lets users leverage the power of the cloud to experiment and develop machine learning workflows on geospatial data. The Planetary Computer consists of four major components:
- The Data Catalog, which includes petabytes of data about Earth systems, hosted on Azure and made available to users for free.
- APIs that allow users to search for the data they need across space and time.
- The Hub, a fully managed computing environment that allows GEOINT professionals to process massive geospatial datasets.
- Applications, built by our network of partners, that put the Planetary Computer platform to work for environmental sustainability.
This session will provide a hands-on tutorial for GEOINT professionals on using capabilities within the Planetary Computer to find geospatial data of interest and to train, test, and deploy machine learning models. Students will be provided access to the cloud-based Planetary Computer and will learn how to use the Data Catalog to retrieve raw raster data through the STAC API. Using the selected imagery and existing training data sets, they will then build machine learning models in Jupyter Notebooks. The model outputs will then be available for further analysis and visualization.
- Discovering and exploiting open source geospatial datasets in the Planetary Computer
- Applying common computer vision models to geospatial data
- Analyzing and visualizing results from computer vision models
- Please bring your own laptop; no special software is required
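For a rough sense of the first step of this workflow, the sketch below searches the Planetary Computer STAC API and signs an asset URL for reading. It assumes the pystac-client and planetary-computer packages; the collection and asset names are assumptions, and this is an illustration rather than the tutorial's own notebooks.

```python
# Minimal sketch of step one of the tutorial workflow: search the Planetary Computer
# STAC API and sign an asset URL for reading. Assumes the pystac-client,
# planetary-computer, and rasterio packages; collection and asset names are assumptions.
import planetary_computer
import pystac_client
import rasterio

catalog = pystac_client.Client.open(
    "https://planetarycomputer.microsoft.com/api/stac/v1",
    modifier=planetary_computer.sign_inplace,   # signs asset URLs so they can be read
)

search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[-122.55, 47.45, -122.20, 47.75],       # example bbox around Seattle
    datetime="2021-07-01/2021-07-31",
)
item = next(search.items())

with rasterio.open(item.assets["B04"].href) as src:   # red band; asset key is an assumption
    print(src.profile["width"], src.profile["height"], src.crs)
```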
Rob Emanuele is a geospatial architect at Microsoft where he works on applying geospatial technology to environmental sustainability use cases as part of the AI for Earth program. Previously he was Vice President of Research at Azavea, a B Corporation focused on products and services using open source geospatial software. Rob is a member of the SpatioTemporal Asset Catalog project steering committee and was previously a Technical Fellow for the Radiant Earth Foundation supporting the advancement of the STAC ecosystem.
2:00 – 3:00 PM
Orbital Insight will offer hands-on training showing how geospatial and geolocation data, processed with a suite of Orbital Insight software tools that employ computer vision, artificial intelligence, and big data processing, can best be used in the Esri ArcGIS platform. The training will also address visualization principles for generating products that represent information in ways decision-makers can easily understand. Course participants will generate and visualize geolocation data and heat maps processed with data from multiple imagery providers and non-imagery sources such as AIS and GPS/WiFi signals. They then will be trained on how to layer these analytic results within the ArcGIS platform.
The training session will involve use cases where participants will create geospatial automation workflows at scale. It also will address how data can be statistically interpreted and overlaid with additional data sources or exported into GIS tools via API, ultimately tipping and cueing additional follow-on research for in-depth analysis. Participants will learn how to augment existing GEOINT workflows through automated data creation. They will practice using various algorithms to analyze specific areas of interest over time to show patterns and trends. Finally, participants will learn how to run Orbital Insight's traceability algorithm, which tracks trends in movement to and from geographic areas, and show these results within ArcGIS.
After the training session, course participants will be able to:
- Generate and visualize geolocation data and heat maps built using multiple imagery and non-imagery sources
- Layer analytic results in the ArcGIS platform
- Create geospatial automation workflows at scale
- Address how data can be statistically interpreted
- Address how data can be overlaid with additional data sources and exported to GIS tools
- Tip and cue additional follow-on research using new techniques
- Use automated data creation to augment existing GEOINT workflows
- Use algorithms to show patterns and trends in geospatial and geolocation data over time
- Use a traceability algorithm to track trends in movement across geographic areas and share the results in ArcGIS
Prerequisites: Participants are not required to have prerequisite knowledge or skills. Orbital Insight will provide temporary access to its software tools for the purposes of this training.
Olivia Koski is a Solutions Engineer at Orbital Insight, serving intelligence clients in the U.S. government. Her work focuses on delivering AI-powered analytical insights using commercial satellite and geolocation data to inform decision makers monitoring everything from foreign military activity to illegal mining and development abroad. Prior to joining Orbital Insight, she ran her own science communications company, published a book on space travel, and wrote about technology for national magazines.
This workshop leverages popular open source projects, GeoServer and PostgreSQL including the PostGIS extension, to give participants a hands-on introduction to deploying a geospatial server on cloud-based infrastructure. GeoServer is an open source, Java-based server that allows users to view and edit geospatial data. GeoServer leverages Open Geospatial Consortium (OGC) standards to facilitate sharing of geospatial information.
In this workshop, we will explore different services, tools, and techniques to deploy and scale GeoServer in the cloud. We will introduce some cloud services that will be leveraged and then provide participants with step-by-step instructions on how to launch GeoServer in the cloud leveraging services that make it easier to monitor, update, and scale to meet the compliance requirements of a regulated production environment.
GeoServer can leverage a variety of data sources, including support for a PostgreSQL database with the PostGIS extension. PostGIS is a spatial database extender for the PostgreSQL object-relational database that underlies many open source projects. We will explore connecting GeoServer to a cloud-based PostgreSQL and PostGIS information source, leveraging open data to illustrate a multi-tier, scalable, cloud-based architecture. Come build with us!
- Introduction to GeoServer and PostgreSQL with the PostGIS extension
- Introduction to cloud-based architectures and deployments
- Create a PostgreSQL database with the PostGIS extension
- Deploy a scalable GeoServer in the cloud
- Modify necessary security configurations to allow GeoServer to access the database
- Add PostgreSQL as a data store in GeoServer
Prerequisite Knowledge and Skills:
- Participants should be somewhat familiar with GeoServer and PostgreSQL
- Participants will need a laptop and be able to log in to the AWS Management Console
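To give a feel for the database side of the lab, the sketch below enables PostGIS and creates a small spatial table that GeoServer could later publish. It assumes the psycopg2 package; the connection parameters and table name are placeholders, not the workshop's actual environment.

```python
# Minimal sketch of the PostGIS side of the lab: enable the extension and create a
# spatial table GeoServer can later publish. Assumes the psycopg2 package; the
# connection parameters and table name are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="mydb.example.amazonaws.com",   # placeholder RDS endpoint
    dbname="geodata",
    user="geoserver",
    password="change-me",
)
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS postgis;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS poi (
            id   serial PRIMARY KEY,
            name text,
            geom geometry(Point, 4326)
        );
    """)
    cur.execute(
        "INSERT INTO poi (name, geom) VALUES (%s, ST_SetSRID(ST_MakePoint(%s, %s), 4326));",
        ("Sample point", -77.0365, 38.8977),
    )
conn.close()
# In GeoServer, this database is then added as a PostGIS data store and the poi
# table published as a layer available over OGC services such as WMS and WFS.
```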
Luke Wells is a Solutions Architect at Amazon Web Services where he focuses on helping national security customers design, build, and secure scalable geospatial applications on AWS. Prior to joining the Solutions Architecture team, Luke helped automate and scale the deployment of the S3 software stack in new AWS regions across the globe. Outside of work he enjoys being in the remote wilderness, cheering on the Virginia Tech Hokies, and spending time with his family.
This course will focus on the Remote Sensing and Imagery Analysis Competency of the GEOINT EBK. While discussing imagery analysis and various open source tools (Python, QGIS, etc.), we'll showcase advanced techniques in time series imagery exploitation. Planet's remote sensing capabilities have become an invaluable source of information for the geospatial community. With over 140 satellites in orbit, Planet provides daily coverage of the entire world's landmass. Planet imagery has proven effective at timely change detection and identifying historical trends. When combined, daily medium- and high-resolution imagery significantly increase the amount of information available to analysts.
This workshop will provide attendees a basic understanding of Planet's capabilities and the type of information that can be extracted at different temporal and spatial resolutions. Attendees will learn how to easily query, download, process, and analyze daily and sub-daily imagery with a variety of open source tools. Each step will build upon the knowledge gained earlier in the course. We'll demonstrate how to interpret imagery manually using visual inspection and then incorporate Python scripts for automation. This course will provide an engaging and dynamic environment centered on active participant learning. Attendees will gain a fundamental understanding of Planet data as well as how to work with other highly temporal spatial datasets and their derived products.
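As a generic sketch of one time-series exploitation step, the code below computes a mean NDVI trend from a stack of locally downloaded analytic scenes. It assumes rasterio, numpy, and matplotlib; the file path pattern and band ordering (red in band 3, NIR in band 4) are assumptions that depend on the product ordered, and this is not Planet's SDK or the course's actual scripts.

```python
# Generic sketch of a time-series exploitation step: compute mean NDVI for each
# downloaded analytic scene and plot the trend. Assumes rasterio, numpy, and
# matplotlib; file names and band order (red=3, NIR=4) are assumptions.
import glob
import numpy as np
import rasterio
import matplotlib.pyplot as plt

mean_ndvi = []
scenes = sorted(glob.glob("scenes/*_analytic.tif"))   # placeholder path pattern
for path in scenes:
    with rasterio.open(path) as src:
        red = src.read(3).astype("float32")
        nir = src.read(4).astype("float32")
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    mean_ndvi.append(float(np.nanmean(ndvi)))

plt.plot(mean_ndvi, marker="o")
plt.xlabel("Scene (chronological)")
plt.ylabel("Mean NDVI")
plt.title("NDVI trend over the area of interest")
plt.savefig("ndvi_trend.png")
```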
Erik Friesen is a Pre-Sales Solutions Engineer at Planet Federal. In addition to his pre-sales duties, he conducts training events for Planet Federal customers and contributed to the early development of the Planet QGIS Plugin. Prior to joining the Planet Federal team, Erik worked as a Training Specialist at Boundless for 2 years and as a GIS Analyst in the oil and gas industry for 10 years. He received a BSc in Geoscience and graduate certificates in GIS and Remote Sensing from The University of Texas at Dallas.
Claudia Hammond is a Pre-Sales Solutions Engineer at Planet Federal, specializing in supporting the Defense Community. Before Planet, she worked for the Office of Naval Intelligence as an all-source Intelligence Analyst focusing on ELINT and as an open source intelligence analyst at the Federal Bureau of Investigation. She received a MA from Johns Hopkins in Global Security and a BA in International Studies from The University of Miami.
Bill Raymond is Planet Federal's Pre-Sales Solutions Engineering team Manager. Bill specializes in designing systems used for GEOINT and IMINT applications. Bill has designed systems for numerous DoD/IC, DHS, and civilian federal agencies. He is a 14-year veteran of the Air National Guard as an Operational Intelligence Analyst. He volunteers as a GeoMentor, helping educators facilitate the use of GIS and geography in the classroom. He received a Bachelor's in Cartography and GIS from Salem State University.
Data scientists and data analysts spend the majority of their time preparing, cleaning, and transforming data, and that is the "heavy lift" of most data science and machine learning projects. In the past 7 years, data science and machine learning platforms have emerged that take on much of this heavy lifting. In 2006, DARPA conducted a study which showed analysts spent 58% of their time on these activities and only 17% of their time doing analysis; that study projected that automation was a potential game changer. In this training course, managers and analysts will gain insight into this landscape, see demonstrations of such automation, and learn how these platforms can improve data science and machine learning projects.
This workshop covers the current landscape of these types of platforms to illustrate how new technologies such as machine learning have become embedded into many of them. This landscape is rapidly changing, and this snapshot will allow decision makers to see how a senior data scientist has watched it evolve over the last 15+ years. The workshop objective is to provide an overview of the landscape and demonstrate specific techniques that reduce the heavy lifting and speed up project delivery. Attendees will need some, but not in-depth, knowledge of data science and machine learning to benefit from this training course. Attendees who are interested in learning more about what should be in a data science or machine learning platform will also benefit.
Richard La Valley has been a data scientist and statistician for 40+ years in both commercial and government roles. He is a Data Science Principal at Parsons. He has a master's degree in Statistics from Pennsylvania State University and a bachelor's degree in Mathematics from St. John's University-Minnesota. His experience with data science in big data began at the US Census Bureau and continued at commercial telecommunications companies as well as in government. He has authored, co-authored, and presented numerous papers on spatiotemporal pattern analysis, social network analysis, and advanced analytical techniques for professional conferences, journals, and training courses, including GEOINT 2018 and GEOINT 2019.
Michael Zaun has been with Syntasa for over six years and currently is the Director of Federal Customer Success Management. In his time with Syntasa, Mr. Zaun has spent three years as Director of Product Management and three years as Director of Global Delivery and Services. He also has 15+ years’ experience in business analytics, web analytics, and information security on both the federal and commercial sides. Mr. Zaun holds a bachelor’s degree in Information Technology and has attained multiple respected professional certifications.
Technologists, engineers, analysts, data scientists, and mission leaders have long been interested in the promise of Quantum Computing, a truly Emerging Competency Field. This training will depict what is available today, what is near-term, and what is beyond the next few years in an overview of quantum computing. The overview will cover the mechanics behind superconducting circuits, the physics concepts and principles of quantum interaction, and applicability to difficult-to-solve technical challenges, followed by a demonstration of running algorithms on an actual quantum computer over the cloud.
You will learn the essentials of Quantum Computing and emerging use cases.
Prerequisites: None. Python notebooks will be shown for students to follow along, but no experience in Python is necessary.
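To give a flavor of the kind of notebook demonstration described, the sketch below builds a two-qubit Bell state and samples it. It assumes the qiskit and qiskit-aer packages and runs on a local simulator rather than cloud quantum hardware; it is not the session's actual demo code.

```python
# Minimal sketch of the kind of notebook demo described: build a two-qubit Bell state
# and sample it. Assumes the qiskit and qiskit-aer packages; runs on a local simulator
# rather than cloud quantum hardware.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)          # put qubit 0 into superposition
qc.cx(0, 1)      # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

result = AerSimulator().run(qc, shots=1024).result()
print(result.get_counts())   # expect roughly equal counts of '00' and '11'
```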
Gabe Chang, firstname.lastname@example.org, Client Technical Leader, Quantum Ambassador, Federal CTO Architect. In addition to his interest in quantum computing, Gabe has approximately 30 years of experience in all facets of large-scale enterprise programs in the DOD, Law Enforcement, and Intelligence Communities. Gabe has served as a SME and visionary in the areas of analytics and AI, representing technical leadership at IBM for numerous government projects and international conferences. Gabe is sought after for the trusted advisor role, technology demonstrations, and solving hard problems for his clients.
The GEOINT community is constantly challenged to increase the fidelity and timeliness of its geospatial intelligence. We use fidelity in a broad sense that includes computational models and simulations; data management and analyses; machine learning and deep learning; image, text, and audio processing; and a broad array of visualization capabilities. This tutorial will discuss how to increase the fidelity and scale of intelligence work to achieve results faster through the use of current and emerging computing tools and resources. Participants will learn about strategies to scale up software and applications, enhance data governance, and improve workflow strategies. Included will be a discussion of strategies for transitioning to advanced computing systems, including high throughput computing, cloud computing, and high performance computing systems. The presenters will describe exemplary case studies from NGA projects that demonstrate enhanced personnel productivity, enhanced fidelity, and significantly reduced time to results. The tutorial will provide participants with references to technical information, training programs and materials, computing resources, and professional staff who can work with the community to achieve these goals and results. The tutorial is designed to benefit directors, managers, developers, researchers, and analysts.
Dr. William Kramer is Executive Director of the New Frontiers Initiative (NFI), the Blue Waters Principal Investigator and Director, and a Computer Science Research Professor. Bill is responsible for all aspects of development, deployment, and operation of the NSF Blue Waters system, the National Petascale Computing Facility, and the associated research and development activities through the Great Lakes Consortium, NCSA, and University of Illinois supporting projects. This involves over 100 staff for integration and testing efforts for the Blue Waters systems and the transition to full operations. The Blue Waters system is the 20th supercomputer Bill has deployed or managed, six of which were top 10 systems. Bill held similar roles at NERSC and NASA.
Galen Arnold: As a system engineer with NCSA, Galen enjoys helping people get the most out of HPC systems such as Blue Waters and XSEDE systems with accelerators. He's part of the Blue Waters applications support team. He has a good working knowledge of most things Unix/Linux/HPC/networking and has a knack for debugging code. He believes K&R C is the one true language but reluctantly admits the superior numerical performance of Fortran codes.
Aaron Saxton is a Data Scientist who works in the Blue Waters project office at the National Center for Supercomputing Applications (NCSA). His current interests are machine learning, data, and migrating popular data/ML techniques to HPC environments. His career has shifted back and forth between industry and academic ventures. Prior to NCSA he was a data scientist and founding member of the agricultural data company Agrible Inc., participating in crop model development and customer-facing deployment. Before that, Aaron worked at Neustar Inc., the University of Kentucky, and SAIC. In the summer of 2014, shortly after joining Neustar, Aaron earned his PhD in Mathematics from the University of Kentucky, studying partial differential equations, operator theory, and the Schrödinger equation.
Friday, Oct. 8
Click on the training session title below for more information.
7:30 – 8:30 AM
The learning objective of this class is to increase participants' knowledge of geospatial data management and data visualization using the free and open-source products GeoServer and MapStore. It will demonstrate how they can be used to create enterprise spatial data infrastructures that leverage both de facto standards (e.g., GeoJSON) and internationally recognized ones (e.g., OGC standards such as WMS and WFS), as well as modern structures like vector tiles.
The class will cover the following:
- Introduction to the OGC standards (WMS, WFS, WMTS, vector tiles)
- Introduction to GeoServer
- Styling maps with Styled Layer Descriptor, creating styles with desktop tools, and optimizing the styling to leverage GeoServer's own custom extensions
- Creation of dashboards, stories and users with MapStore
- Real-world enterprise deployments and recommendations for scalability and performance
Pre-requisites: Basic understanding of GIS concepts.
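As a point of reference for the standards covered above, the snippet below sketches a WFS 2.0 GetFeature request that returns GeoJSON from a GeoServer instance. The endpoint URL is a placeholder and topp:states is simply the sample layer that ships with GeoServer; any WFS-compliant server follows the same pattern.

    import requests

    # Hypothetical GeoServer endpoint; substitute your own server and layer name.
    url = "https://geoserver.example.com/geoserver/ows"
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": "topp:states",          # sample layer shipped with GeoServer
        "outputFormat": "application/json",  # return GeoJSON rather than GML
        "count": 10,                         # limit the response size
    }
    features = requests.get(url, params=params, timeout=30).json()["features"]
    print(len(features), features[0]["properties"])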
Dr. Bermudez has more than 25 years of experience in the information and technology industry. He is the CEO of GeoSolutions US, which provides innovative, robust and cost-effective solutions leveraging on best-of-breed Open Source products. Before joining GeoSolutions, he performed as the Executive Director of the Innovation Program at the Open Geospatial Consortium, the world's leading organization on developing geospatial standards. He is also Adjunct Professor in the GIS Masters Program at the University of Maryland. Dr. Bermudez has Ph.D. and M.S. in Environmental informatics from Drexel University and an M.S. in Industrial Engineering from the Andes University in Bogota, Colombia.
In this workshop, attendees will gain hands-on experience with the end-to-end lifecycle for building and deploying geospatial machine learning models on overhead imagery. The workshop will leverage the Amazon SageMaker service as a tool to ease this process. Attendees will gain experience labeling data, accelerating model training via distributed training and automated hyperparameter optimization, and deploying trained models for inference, through both unsupervised and supervised use cases. Attendees will walk away from the workshop with the tools needed to train ML models, giving them the capability to deploy production-capable models for their analyst teams. Attendees gain the skills needed in Competency 1 - T14. Neural Networks/Artificial Intelligence. Additionally, they gain skills required for the emerging competencies (3.3 - Machine Learning, 3.5 - Neural Networks/Artificial Intelligence) to truly accelerate the mission capabilities of their agencies.
Gain a basic understanding of training, developing, and deploying ML models for overhead imagery use cases.
Prerequisites: Attendees are expected to have a working knowledge of Python.
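To give a flavor of the workflow, here is a minimal sketch of launching a distributed SageMaker training job and deploying the result with the SageMaker Python SDK. The container image, IAM role, S3 paths, and hyperparameters are placeholders, not the workshop's actual materials.

    import sagemaker
    from sagemaker.estimator import Estimator

    session = sagemaker.Session()
    estimator = Estimator(
        image_uri="<account>.dkr.ecr.us-east-1.amazonaws.com/geo-detector:latest",  # placeholder image
        role="arn:aws:iam::<account>:role/SageMakerExecutionRole",                  # placeholder role
        instance_count=2,                 # distributed training across two instances
        instance_type="ml.p3.2xlarge",
        hyperparameters={"epochs": 20, "lr": 0.001},
        sagemaker_session=session,
    )
    estimator.fit({"train": "s3://my-bucket/labeled-chips/train/"})  # placeholder S3 prefix

    # Host the trained model behind a real-time inference endpoint for analyst tooling.
    predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")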
Michael Liu is an Amazon Web Services (AWS) Senior Solutions Architect specializing in AI/ML with over 20 years of experience supporting the DoD/IC. Michael possesses a BSE from Duke University, MS in Engineering from University of Maryland, and an MBA and MS in Systems Engineering from the Massachusetts Institute of Technology (MIT).
Standards are the key enabler to make Geospatial Intelligence (GEOINT) data FAIR: Findable, Accessible, Interoperable, and Reusable. The role of the Open Geospatial Consortium (OGC) is to identify requirements and develop standardized means to achieve FAIR data principles. This training session will provide participants with an overview of the geospatial Standards development community, including OGC, the Defense Geographic Information Working Group (DGIWG), International Hydrographic Organization (IHO), International Organization for Standardization (ISO), and Simulation Interoperability Standards Organization (SISO). The training session will also describe the role played by the OGC Definitions Server in supporting the implementation and use of OGC and other Standards. The aim of the training session is to enable GEOINT professionals to understand the applicability of Standards-based interfaces and encoding formats to achieve their missions. By equipping GEOINT professionals with the skills to identify and apply Standards, this training course will improve the professionals’ ability to deliver the optimal content in rapid fashion to their stakeholders.
- Understanding of the fundamentals of interoperability between geospatial technologies
- Awareness of the different types of APIs that have been standardized within the OGC
- Understanding of the relationship between OGC and other standards development organizations
- Awareness of NSG profiles and the OGC Standards on which they are based
- Awareness of the OGC Definitions Server and its role in the implementation and use of OGC Standards
- Understanding of the need for compliance and interoperability testing
- Awareness of freely available compliance testing resources
Prerequisites: A fundamental understanding of the types of GEOINT data.
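As a small illustration of the standards-based interfaces discussed above, the sketch below walks an OGC API - Features server: list the collections it publishes, then request items from one of them filtered by a bounding box. The server URL and bounding box are hypothetical; the resource paths and parameters come from the standard, and the f=json shortcut is a common implementation convention (some servers negotiate via the Accept header instead).

    import requests

    base = "https://demo.example.org/ogcapi"  # hypothetical OGC API - Features endpoint

    # Discover the collections the server publishes.
    collections = requests.get(f"{base}/collections", params={"f": "json"}, timeout=30).json()
    first_id = collections["collections"][0]["id"]

    # Retrieve a few items from the first collection within a lon/lat bounding box.
    items = requests.get(
        f"{base}/collections/{first_id}/items",
        params={"bbox": "-77.2,38.8,-76.9,39.0", "limit": 5, "f": "json"},
        timeout=30,
    ).json()
    print(items["numberReturned"], "features returned")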
Mr. Simmons is OGC's Chief Standards Officer. In this role, he provides oversight and direction to the Consortium’s technical and program operations and deliverables. Mr. Simmons also continues to lead the OGC Standards Program, where he ensures that standards progress through the organization’s consensus process to approval and publication. Preceding his time as a member of OGC staff, Mr. Simmons was an active member of OGC, promoting best practices in 3D Information Management (3DIM) as chair of the OGC 3DIM Domain Working Group and chairing or participating in numerous OGC infrastructure, mobility, and web services working groups. His OGC-related research has focused on data lifecycle management, integration, and dissemination. Mr. Simmons formerly worked at CACI International, Inc. and TechniGraphics, Inc. performing GEOINT-related functions. Prior to 1994, Mr. Simmons worked as a petroleum and consulting geologist.
- Prior GIS knowledge from any system is useful
- Understand core geoSQL concepts such as geographies and geometries
- Practice common overlay query patterns using ST_CONTAINS and ST_INTERSECTS (see the sketch after this list)
- Learn to perform proximity analyses with ST_BUFFER, ST_DISTANCE, ST_DWITHIN
- Create aggregate expressions such as counts, sums and averages
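A minimal sketch of the query patterns listed above, written as plain SQL strings in Python. The table and column names (incidents, districts, substations, geom) are hypothetical, and execution would go through whatever SQL client your engine provides; the ST_* spatial predicates are the concepts the class covers.

    # Hypothetical tables: incidents(id, geom), districts(name, geom), substations(id, geom).

    # Overlay + aggregation: count incidents per district with ST_CONTAINS.
    points_in_districts = """
        SELECT d.name, COUNT(*) AS incident_count
        FROM incidents i
        JOIN districts d
          ON ST_CONTAINS(d.geom, i.geom)
        GROUP BY d.name
    """

    # Proximity: incidents within 500 units of a substation with ST_DWITHIN.
    near_substations = """
        SELECT s.id AS substation_id, i.id AS incident_id
        FROM substations s
        JOIN incidents i
          ON ST_DWITHIN(s.geom, i.geom, 500)
    """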
Dr. Michael Flaxman is OmniSci’s Spatial Data Science Practice Lead. He has served on the faculties of MIT, Harvard and the University of Oregon. Dr. Flaxman has been a Fulbright fellow, and served as an advisor to the Interamerican Development Bank, the World Bank and the National Science Foundation. Dr. Flaxman previously served as industry manager for Architecture, Engineering and Construction at ESRI, the world’s largest developer of GIS technology. Dr. Flaxman received his doctorate in design from Harvard University in 2001 and holds a master’s in Community and Regional Planning from the University of Oregon and a bachelor’s in biology from Reed College.
Link analysis is the process of building a network of interconnected objects through relationships to discover patterns and trends. The main objective is to find and link information from various sources and to adequately represent and estimate the relevance of this information to discover hidden relationships. Link analysis deals with people, organizations, places, events, relationships, and other intangible connections between entities. Many data elements used in link analysis have quantifiable spatial properties, such as a person's residence or a meeting's location. Link analysis is enhanced by combining this analysis with spatial and temporal analysis. Learn how to use ArcGIS Pro Intelligence to create the visualization of entities and relationships in a link chart that can be extended by incorporating the dimensions of both space and time.
- Explore some of the analytical capabilities of ArcGIS Pro Intelligence
- Learn how to discover hidden relationships in the data
- Be able to create link charts to visualize entities and relationships
- Prerequisites: Basic GIS skills; ArcGIS Pro Intelligence 2.8 installed (a free 21-day trial of ArcGIS Pro is available); a username for ArcGIS Online (a free 60-day trial of ArcGIS Online is available)
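To make the link-analysis concept above concrete outside of any particular tool, here is a minimal sketch that builds a small entity-relationship graph and surfaces an indirect connection. The entities, attributes, and the use of the networkx library are illustrative assumptions, not the ArcGIS Pro Intelligence workflow taught in the session.

    import networkx as nx

    # Hypothetical entities; node and edge attributes carry the spatial and temporal
    # dimensions that a link chart can later incorporate.
    G = nx.Graph()
    G.add_node("Person A", kind="person")
    G.add_node("Meeting 1", kind="event", location=(33.31, 44.36), time="2021-03-04T14:00Z")
    G.add_node("Organization X", kind="organization")
    G.add_edge("Person A", "Meeting 1", relation="attended")
    G.add_edge("Organization X", "Meeting 1", relation="hosted")

    # A hidden relationship shows up as an indirect path between two entities.
    print(nx.shortest_path(G, "Person A", "Organization X"))
    # ['Person A', 'Meeting 1', 'Organization X']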
Julia Bell is a National Government Solution Engineer at Esri, supporting the Defense team. She has been with Esri for 2 years, where she primarily works with defense readiness and operations customers. Prior to Esri, she received both a B.S. and M.S. in GIS at the University of Maryland, College Park.
How do you add and connect new map features to legacy geospatial data layers without damaging the geometry or attributes of baseline data? This is the challenge of the conflation process for vector geospatial data. This course will introduce the issues and potential solutions involved with fusing datasets which may have been created at different times and with different levels of precision. Participants will learn about procedures for matching and transforming data and will be given overviews of toolsets that exist for automated conflation processing.
The sources of new geospatial features are far outstripping the ability of analysts to combine existing datasets with the new data. For example, automatic feature detection and crowd sourced data may produce newer, more detailed feature geometries but inserting this new geometry into existing database schemas may create unexpected problems with previously accurate data. This course will employ use scenarios involving commonly used vector data features such as roads and utility lines. Attendees will learn methods of choosing the appropriate conflation and validation processes for different types of features.
By the end of the course, students will be informed on the technical issues and potential solutions involved with synthesizing geospatial datasets that may have been created at different times and with different levels of precision.
- Why is conflation important?
  - Datasets need to be geometrically superior with richer attribution
  - Need to integrate high volumes of data from machine learning
- Examples of problems that are solved by conflation
  - Overlapping datasets
  - Adjacent datasets
- What are the differences between feature matching and edge matching?
- What types of pre-processing are necessary?
  - Spatial adjustments
  - Transferring attributes
- Differences in conflating points, lines and polygons
- How is metadata conflated?
- What types of post-processing are necessary?
  - Validation of conflated geometry and attributes
- What commercial conflation tools are available?
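To illustrate one of the matching procedures discussed above, here is a minimal feature-matching heuristic using the shapely library: a newly extracted road segment is accepted as a match when most of its length falls within a tolerance buffer around the baseline geometry. The geometries, tolerance, and 80% threshold are illustrative assumptions, not the course's toolsets.

    from shapely.geometry import LineString

    # Hypothetical baseline road centerline and a newly extracted candidate segment.
    baseline = LineString([(0, 0), (100, 0)])
    candidate = LineString([(0, 1.5), (60, 2.0)])

    tolerance = 5.0  # map units
    overlap = candidate.intersection(baseline.buffer(tolerance)).length

    if overlap / candidate.length > 0.8:
        # Candidate matches: transfer attributes from the baseline feature,
        # then validate the resulting geometry and attribution.
        print("match: transfer attributes and validate")
    else:
        print("no match: route to manual review or edge matching")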
Ben Woolf is a Geospatial Analyst at L3Harris Technologies with extensive experience in geospatial database development projects for continental scale geographies. He has been involved in numerous geospatial database projects involving schema design, attribute development, feature merging, maintenance and user support. He specializes in conflation automation, database integration, and data validation.
1:00 – 2:00 PM
Geospatial data has changed our daily lives with its ability to map our situational awareness with only a connected smart device. We can predict events at our location or sites of interest. Material characterization, change detection, and material predictive analytics are simply the next step in target awareness. Hyperspectral imaging is not a term familiar to the masses, but the needs it addresses are. Requirements in fire risk, carbon credits, real-or-decoy targeting, and productivity in a crop or mine simply need a simplified last-mile connection to users. Answers in a browser, on a phone, or in a desktop application have lacked a hyperspectral data portal. For users who don't hold a Ph.D., HySpecIQ is creating and delivering such a portal, live now, in advance of its vehicle launch. HySpecIQ will review the fundamentals of hyperspectral imaging and its uses across varied market sectors. We will review basic workflows and how online tools and cloud-hosted content can simplify the integration of hyperspectral data. Users will be able to interact with data live via a web portal.
Topics of interest may include but are not limited to:
Spatial Data Discovery, Cloud Computing, Disaster and Risk Management, Spatiotemporal Data Acquisition, Modelling, and Analysis, Earth Observation Systems
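As a tiny example of the band-math workflows the session will review, the sketch below picks the bands nearest the red and near-infrared wavelengths from a hyperspectral cube and computes a normalized-difference vegetation index. The cube shape, wavelength range, and random data are purely illustrative assumptions.

    import numpy as np

    # Hypothetical hyperspectral cube: (bands, rows, cols), with band-center wavelengths in nm.
    cube = np.random.rand(224, 512, 512).astype(np.float32)
    wavelengths = np.linspace(400, 2500, 224)

    # Select the bands nearest 670 nm (red) and 860 nm (near infrared).
    red = cube[np.argmin(np.abs(wavelengths - 670))]
    nir = cube[np.argmin(np.abs(wavelengths - 860))]

    # NDVI-style normalized difference, a simple proxy for vegetation productivity.
    ndvi = (nir - red) / (nir + red + 1e-6)
    print(ndvi.mean())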
Charles Mondello, CTO, HySpecIQ, ASPRS Fellow, MS Imaging Science
David Nilosek, VP Imaging Science & SW Engineering, HySpecIQ, PhD Imaging Science
The rapid delivery of modern applications across today’s dynamic hybrid cloud environments presents an increasing cyber threat landscape, accelerating the demand for a security model that trusts nothing and authenticates and authorizes everything. Automation of the application deployment process with DevOps practices further accelerates the ability of an organization to design, test, and secure software in ephemeral and dynamic environments. This training session introduces and develops attendees' knowledge on the concept of Zero Trust.
Zero Trust security is predicated on trusted identities. In this session we will discuss the foundational technologies that enable adoption of Zero Trust methodologies and approaches. Machine authentication and authorization, machine-to-machine access, human authentication and authorization, and human-to-machine access are the four foundational categories for identity-driven controls and Zero Trust security. Each of these categories is built on the six pillars of the Zero Trust Reference Architecture. These pillars begin with a security framework centered on (1) human and machine identities that are (2) mutually authenticated and (3) authorized by application, system, or enclave. Access is then (4) bound by time. Data is then (5) encrypted across the environment to protect secrets as well as data in transit and at rest. Finally, (6) auditing and logging ensure secure tracking and storing of activity across the environment. Implementation of these six principles significantly strengthens the security posture of an automated DevOps application environment.
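A minimal sketch of how a few of these pillars look from a client workload's perspective, using Python's requests library: the workload presents its own certificate for mutual TLS (identity and mutual authentication), trusts only an internal CA (encryption in transit), and sends a short-lived token (time-bound, authorized access). The file paths, endpoint, and token variable are hypothetical.

    import requests

    short_lived_token = "..."  # issued by an identity broker and valid only for minutes (pillar 4)

    resp = requests.get(
        "https://service.internal.example/api/status",   # hypothetical internal service
        cert=("client.crt", "client.key"),               # this machine's identity for mutual TLS (pillars 1-2)
        verify="internal-ca.pem",                        # trust only the internal CA; traffic stays encrypted (pillar 5)
        headers={"Authorization": f"Bearer {short_lived_token}"},  # per-request authorization (pillar 3)
        timeout=10,
    )
    resp.raise_for_status()  # failures should be logged and audited centrally (pillar 6)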
Learning Objectives: Attendees of this session will learn:
- How an identity-based security paradigm strengthens security controls and enables a highly dynamic and distributed application environment.
- Where the principles of zero trust security and associated automation capabilities are available today.
- How to create a Zero Trust environment.
- The applicability of Zero Trust to the GEOINT EBK Cross-Functional Knowledge & Skills and Emerging Competencies: Zero Trust Security and Automation.
Prerequisites: A general understanding of security concepts and application development.
Timothy J. Olson is a Sr. Systems Engineer at HashiCorp with over 30 years of extensive technical, managerial, and consulting experience in the design, production, programming, deployment, and support of next generation IT systems and solutions. Mr. Olson is a Technical Specialist and Technology Evangelist focusing on Federal Government customers successfully designing, adopting and providing Hybrid Cloud and Next Generation IT technology solutions.
Today's intelligence analyst spends much of their time acquiring, managing, and disseminating data to facilitate the integration and use of geospatial information in support of mission operations. In most cases, those duties are filled with repeated manual tasks that could potentially be automated, freeing up valuable time to focus on more productive activities. Are there opportunities to streamline the intelligence requests process, or better track the lifecycle of supplied intelligence products? This session will provide hands-on training and experience with an industry-leading low-code platform that leverages the intelligence analyst's subject-matter expertise to easily create workflow-based mission applications that streamline operations and unlock productivity. Create a mission application workflow in less than an hour leveraging the same platform that powers the NGA Service+, DIA Service-Central, and NRO HelpNow platforms. Use the same automation and workflow capabilities Air Force, Army, and Marine Corps Intelligence leverage for their own intelligence production activities.
Leverage no-code tools to create workflows and user portals, automate repeatable actions, and generate powerful reports and performance-analytics visualizations; bring the same modern experience analysts have in their personal lives to the mission space. Low-code skills are part of the next generation of core competencies, allowing business and mission users to create powerful applications: applications that deliver meaningful improvements in productivity, experience, and actionable insight to the day-to-day operations of intelligence production and analysis. Don't miss this training opportunity to understand how automation fits within the EBK Emerging Technologies competency.
- Learn how to develop workflow based mission applications on a low-code/no-code platform in less than 1 hour
- Create workflows, user portals, automate repeatable actions and generate powerful reports and performance analytics visualizations
- See how applications can deliver meaningful impacts in productivity, user experience and data analysis
Michael Slabodnick is a Platform Solution Consultant at ServiceNow. He started working in technology on the service desk and eventually made his way to ITSM process consulting. In 2011, he broke ground in ITSM by building out gamification on the ServiceNow Platform to improve efficiency and the work experience for the service desk staff at Omnicare, prior to its acquisition by CVS. From there, he went on to ITSM process and ServiceNow consulting, joining ServiceNow as a solutions consultant in 2015. For the past three years, he has been on the Creator Workflow team, where he supports DoD customers in realizing the value of creating custom business applications on the ServiceNow Platform.
The space industry has been experiencing significant changes over the past decade. These rapid changes necessitate methodologies to navigate the increasingly complex design and objective spaces associated with constructing complex architectures. Decision making for these complex problems is fraught with challenges: stakeholders may not clearly know what they want; they have different interests, multiple perspectives, and differing underlying assumptions; and a lack of knowledge about what is possible colors judgments about what is desirable. This presentation will discuss a decision support process that uses evolutionary algorithms to derive solution sets for complex problems involving multiple interrelated factors. Based on the principles of biological evolution, these algorithms naturally select the most suitable options within a large and complex design space. The goal is to help decision makers understand the benefits and regrets of key trade-offs, build consensus, and make sound decisions supported by data. Application of this iterative, data-driven decision support process has been extremely impactful for a wide variety of customers over the course of the last decade. Common applications include constellation design, orbital replenishment planning, satellite trajectory optimization, ground site optimization, and resiliency. In each of these areas, application of this process has demonstrated the capacity to unseat decades of perceived design optimality in favor of new and innovative approaches for solving modern problems.
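To ground the terminology, here is a toy evolutionary loop in Python showing the selection, crossover, and mutation steps the session will cover. The one-dimensional design variable and quadratic cost function are illustrative stand-ins, not the GRIPS process itself.

    import random

    def cost(x):
        # Toy objective: a single design variable with a known optimum at x = 3.2.
        return (x - 3.2) ** 2

    def evolve(pop_size=30, generations=50, mutation_rate=0.2):
        population = [random.uniform(-10, 10) for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the better half of the population.
            population.sort(key=cost)
            parents = population[: pop_size // 2]
            # Crossover: blend pairs of parents to create offspring.
            offspring = []
            while len(parents) + len(offspring) < pop_size:
                a, b = random.sample(parents, 2)
                child = (a + b) / 2.0
                # Mutation: occasionally perturb a child to keep exploring the design space.
                if random.random() < mutation_rate:
                    child += random.gauss(0, 1.0)
                offspring.append(child)
            population = parents + offspring
        return min(population, key=cost)

    print(evolve())  # converges near 3.2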
- Brief discussion about The Aerospace Corporation and our mission
- Learn about the GRIPS Decision Support Process
- Learn about the mechanics and terminology associated with multi-objective optimization
- Learn about classical optimization methods and the subsequent development of evolutionary heuristics
- Understand where these search and optimization algorithms fit into the spectrum of optimization methods
- Learn how and when to apply the GRIPS Decision Support Process
- Review historical examples of where and how this process has been successfully applied
Mr. Kyle Hanifen received his B.S. in Mathematics from The College of William & Mary in 2007 and his M.S. in Applied Mathematics from the University of Colorado at Boulder in 2010, and is currently pursuing an M.S. in Data Science at Northwestern University. In 2010, Kyle joined The Aerospace Corporation in the Performance Modeling and Analysis Department. From 2015 to 2018, Kyle served as the Passive Sensor lead for the Imagery Programs Division Future Architectures program office, supporting future system requirements development and architecture optimization studies. In 2017, Kyle returned to the Performance Modeling and Analysis Department, where he received a promotion to Project Leader in 2019. Since joining Aerospace, Kyle's primary focus has been on architecture coverage and collection performance analysis, architecture optimization, and multidimensional visualization and analysis. Kyle has applied his expertise to a broad range of customers including the National Reconnaissance Office, the National Geospatial-Intelligence Agency, the Office of the Director of National Intelligence, the National Air and Space Intelligence Center, and the Air Force Space and Missile Systems Center.
Location-based services, Internet of Things (IoT) devices, autonomous vehicles, and military robots require location information to be current, reliable, and actionable. However, the current state of practice for describing the spatial accuracy of location is insufficient to capture the error sources during data capture at the sensor level, the necessary ancillary data used for processing the location data, and the inherent errors in the data transformations (projections, resampling, warping, etc.) necessary to register, extract, and identify the feature content that is needed for location-based services. This course will review data accuracy standards and provide case studies in data curation, qualification, and certification, including policies, frameworks, and methods that can be applied to any multi-INT fusion effort. Examples will illustrate the limitations that arise when spatial inaccuracy dominates the fusion process, and the course will recommend best practices during data acquisition opportunities to maximize data usability in a curated data ecosystem suitable for multi-INT collaboration.
Learning Objectives: The proposed course will review current geodetic standards for geolocation accuracy and their limitations for error propagation and data fusion and link these to standards for location-based services such as GPS and GNSS. Standard methods for independent verification and validation of geolocation data including American Society of Photogrammetry and Remote Sensing (ASPRS) Accuracy Standards (2014) and MIL-STD 600001 (1991) will be used to illustrate the value of routine geolocation testing in multi-source/multi-domain data integration. Examples of data spoofing will be used to illustrate the pernicious nature of unintended and intentional data manipulation. Simple methods for cross-domain data validation will be used to illustrate easy access points for multi-source test, evaluation, verification, and validation. Examples of USG data qualification and certification will be used to identify key opportunities for data acquisition efforts to include accuracy standards and data qualification and certification. Case studies from USGS and FEMA highlight best acquisition practices that enable the establishment of a curation schema for data ecosystems within current acquisition processes.
Desired outcome: This course will discuss methods for combining error estimates and error budgets that normalize and conflate error estimates into a common framework, so that all of the data associated with a geolocation event, such as the Ukraine International Airlines Flight PS752 event in 2020, can be integrated into a multi-domain, multi-source, multi-INT common operating picture where the error sources do not interfere with the rapid development of situational awareness.
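As a simple worked example of combining independent error components into a common framework, the sketch below root-sum-squares a hypothetical horizontal error budget and converts it to an NSSDA/ASPRS-style horizontal accuracy at the 95% confidence level. The component values are invented for illustration, and the 1.7308 factor assumes equal RMSE in x and y.

    import math

    def rss(*components):
        # Root-sum-square of independent error components (RMSE or 1-sigma values).
        return math.sqrt(sum(c * c for c in components))

    # Hypothetical horizontal error budget for one image-derived point, in meters (RMSE).
    sensor_pointing  = 2.1
    gps_ephemeris    = 0.8
    ortho_resampling = 1.5

    rmse_x = rmse_y = rss(sensor_pointing, gps_ephemeris, ortho_resampling)
    rmse_r = math.sqrt(rmse_x**2 + rmse_y**2)   # radial RMSE for independent x and y errors
    accuracy_95 = 1.7308 * rmse_r               # NSSDA horizontal accuracy at 95% (RMSE_x == RMSE_y)

    print(f"RMSE_r = {rmse_r:.2f} m, horizontal accuracy (95%) = {accuracy_95:.2f} m")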
Dr. Abrams holds B.A., M.A., and Ph.D. degrees in Physics from the University of California at Berkeley. In addition to authoring 54 papers in refereed journals, Dr. Abrams is an international leader in high-resolution spectroscopy, active and passive remote sensing, and geolocation, and he co-authored the book Fourier Transform Spectrometry. Since 9/11, he has been an embedded contract scientist, focusing on geolocation with ISR systems and rapid transition into operations. He has deployed with operational teams on four continents, most recently in Africa. He has been married for 33 years and is the father of four adopted sons, each from a different continent.
Geoprocessing workflows on massive datasets such as imagery, digital elevation models, point clouds, and telemetry data can take days to weeks to process when using a single workstation or large server. However, these workflows can easily be scaled with little to no code changes to the geoprocessing workflows by encapsulating those processes within a container and distributing the jobs across HPC clusters utilizing SLURM Workload Manager, a free and open-source job scheduler.
As an example, processing thousands of tiled DEMs to create a global hillshade can take days on the fastest and largest single node servers. However, by scaling simple GDAL processes on each tile in parallel across multiple nodes in a cluster, runtime is reduced from days to hours. This is only one of many examples that can be applied to workflows such as imagery classification, object detection, vector-to-raster or raster-to-vector conversion, as well as ETL pipelines.
- An introduction to HPC
- How to encapsulate geoprocessing workflows in a container
- How to scale containerized workflows in HPC using SLURM
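The hillshade example above boils down to running the same small GDAL step on every tile. A minimal per-tile worker might look like the sketch below, with the tile chosen by the SLURM array index so the containerized script can be fanned out across nodes with an sbatch array job; the data paths are hypothetical placeholders.

    import os
    from osgeo import gdal

    # The SLURM array index selects which DEM tile this job instance processes.
    tile_index = int(os.environ.get("SLURM_ARRAY_TASK_ID", 0))

    tiles = sorted(os.listdir("/data/dem_tiles"))              # hypothetical mounted input path
    src = os.path.join("/data/dem_tiles", tiles[tile_index])
    dst = os.path.join("/data/hillshade", tiles[tile_index])   # hypothetical output path

    # gdal.DEMProcessing wraps the gdaldem utility; "hillshade" renders shaded relief for one tile.
    gdal.DEMProcessing(dst, src, "hillshade", zFactor=1.0, computeEdges=True)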
Eric Adams is a Geospatial Functional Expert and Program Manager at General Dynamics Information Technology (GDIT), where he applies his over 22 years of experience in Geospatial Intelligence and GIS to deliver cutting-edge solutions for multiple government contracts in the Intelligence Community and Department of Defense.
Eric’s prior experience includes serving active duty in the US Army as a Geospatial Engineer completing combat deployments to both the Middle East and Southwest Asia. He has also served as an NGA-Certified Senior Instructor for the U.S. Army’s Geospatial Engineer Advanced Training courses. Eric has successfully managed and instructed over 300 US soldiers in Geospatial Intelligence Analysis, GIS and Intelligence Analysis including Imagery and FMV.
Eric earned his master’s degree in Geospatial Information Sciences and Technologies from North Carolina State University. He earned an additional master’s degree in Intelligence Operations and Analysis as well as his bachelor’s degree in Intelligence Studies from American Military University.
Steven Cocke is a Data Scientist for GDIT. In this role, he applies GPU-accelerated deep learning to high-resolution satellite imagery within a multi-node/processing High Performance Compute (HPC) environment to automate object detection, model management/versioning, model training, and visualization.
Prior to joining GDIT, Steven served in a variety of roles in both the private and public sectors, ranging from business intelligence and data analysis to systems engineering. His experience has given him a wide breadth of technical knowledge, including expertise in Slurm, Labelbox, Roboflow, Python, Linux, TensorFlow, PyTorch, OmniSci, Elasticsearch, Weights & Biases, and relational/non-relational databases. He is also a Certified SAFe 5 Practitioner.
Steven earned his bachelor’s degree in Systems and Information Engineering from the University of Virginia and continued his education at Southern Methodist University, where he received his master’s degree in Data Science with a concentration in Machine Learning. Outside of work, Cocke serves as a Data Science Teaching Assistant at The Knowledge House, an educational institution in New York that’s committed to uplifting underserved communities by teaching digital skills, like coding and design.