A use case example of collaboration on Covee

How can you improve cancer detection using machine learning? In this post we will analyze the team and workflow required to answer this question.

If you have read our latest posts, you will know by now that data-driven knowledge work requires complementary skills and knowledge workers with different areas of expertise or research.

At Covee Network, we provide a talented pool of data scientists, computer scientists, software engineers, and industry domain experts across various fields. Matching talented people to data-driven projects can now be done efficiently on our network.

Our goal in this post is to describe a use case of data-driven knowledge work and team collaboration. We have already described in our solution the smart contracts and governance mechanisms that we have designed for automated team coordination, fair peer-to-peer review, and the payout of stakes and client fees.

With these mechanisms in place, Covee knowledge workers can select their role in a data-driven project, be motivated to contribute and collaborate, and be fairly rewarded for their contribution upon delivery of the project.

Workflow of a personalized cancer treatment

The list of use cases for knowledge work collaboration is long: algorithmic trading, business analytics, engineering, genomics, medical research, physics and many more. We specifically picked an interesting example from the rapidly evolving field of cancer diagnostics called Radiomics.

Our problem requires multiple skills and resources: coding, data sets, domain expertise, mathematical and statistical analysis, and visualization. In order to solve it, practitioners with cross-disciplinary skills need to be involved from project ideation all the way to delivery. The client in this case is a Cancer Treatment and Research Center seeking a personalized treatment for a patient.

Knowledge workers needed for this project and the percentage of work each needs to contribute

Here is a short summary of the workflow before we jump into more details:

1. Individual project roles and tasks are confirmed, and the team is formed.
2. The data required for analysis is aggregated by the clinician. After that, an experienced radiologist contours the tumor areas as regions and volumes of interest.
3. An imaging data engineer gains insights through the use of advanced machine learning and statistical algorithms.
4. The biostatistician validates the insights and presents them in formal visualization and documentation.
5. The project is delivered, peer-to-peer reviews are submitted, and rewards are paid out according to each team member's contribution.
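The five steps above can be sketched as an ordered pipeline. This is a toy model for illustration only: the stage names, owners, and outcomes are paraphrased from the summary, and the actual Covee workflow logic lives in the network's smart contracts.

```python
# Illustrative sketch: the five workflow stages as an ordered pipeline.
WORKFLOW = [
    ("team formation",   "all members",             "roles and tasks confirmed"),
    ("data acquisition", "clinician + radiologist", "data aggregated, tumors contoured"),
    ("analysis",         "imaging data engineer",   "ML/statistical insights extracted"),
    ("validation",       "biostatistician",         "insights validated and documented"),
    ("delivery",         "all members",             "reviews submitted, rewards paid out"),
]

def run_workflow(workflow):
    """Walk the stages in order, reporting who is responsible for what."""
    completed = []
    for stage, owner, outcome in workflow:
        # In a real system each stage would block on deliverables and
        # on-chain confirmations; here we simply record the progression.
        completed.append(f"{stage}: {owner} -> {outcome}")
    return completed

for line in run_workflow(WORKFLOW):
    print(line)
```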

A short background on Radiomics

Radiomics and Radiogenomics are data-science-driven techniques applied in cancer diagnosis and treatment monitoring. Radiomics is the analysis of standard images acquired from CT, MRI, and PET scans to extract and characterize large amounts of quantitative features from tumors and normal tissues.

It uncovers hidden biological information that is often impossible to detect by the naked eye in traditional radiologist analysis. Radiogenomics, in turn, involves the mining of radiomic data to detect correlations with genomic patterns.
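A toy illustration of the radiogenomics idea: correlate one radiomic feature with one genomic variable across patients. All values and variable names here are invented for the sketch; real studies correlate hundreds of features against genomic panels with multiple-testing corrections.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-patient data: a tumor-texture score and a gene-expression level.
texture_score   = [0.8, 1.1, 1.9, 2.4, 3.0]
gene_expression = [1.2, 1.5, 2.6, 3.1, 3.9]
print(round(pearson(texture_score, gene_expression), 3))
```

A strong correlation between an imaging feature and a genomic readout is the kind of pattern radiogenomics mines for.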

Stage 1: Project ideation and team formation

To initiate a radiomics project, the team of collaborators conducts a feasibility review of the project specification and deliverables.

Key project requirements, data, and success metrics are specified, and individual roles and tasks are confirmed.

Stage 2: Data acquisition & exploration

At this stage the data required for analysis is acquired, aggregated, and pre-processed. In tumor segmentation, an experienced radiologist, a radiation oncologist, or an AI algorithm contours the tumor areas as regions of interest (ROIs) and volumes of interest (VOIs).

Following segmentation, semantic and computational features related to tumor heterogeneity are extracted; these include tumor intensity, tumor shape, tumor texture, and wavelet features. Stability criteria for feature selection are assessed by methods such as the RIDER test/retest and multiple delineation.
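A simplified sketch of first-order feature extraction: intensity statistics computed only over the pixels inside the contoured ROI mask. Real radiomics pipelines use dedicated toolkits and extract hundreds of features (shape, texture, wavelet); this toy covers intensity only, on invented data.

```python
import statistics

def intensity_features(image, mask):
    """image and mask are equally sized 2-D lists; mask is 1 inside the ROI."""
    roi = [pixel
           for img_row, mask_row in zip(image, mask)
           for pixel, inside in zip(img_row, mask_row)
           if inside]
    return {
        "mean":   statistics.mean(roi),
        "stddev": statistics.pstdev(roi),   # population standard deviation
        "min":    min(roi),
        "max":    max(roi),
        "energy": sum(p * p for p in roi),  # sum of squared intensities
    }

# A tiny 3x3 "scan" with a 2-pixel ROI along the right edge.
image = [[10, 12, 50],
         [11, 13, 52],
         [ 9, 14, 51]]
mask  = [[0, 0, 1],
         [0, 0, 1],
         [0, 0, 0]]
print(intensity_features(image, mask))
```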

To meet clinical data protection regulations, several data formatting and database standards exist for storing pre-processed data, including DICOM, Picture Archiving and Communication Systems (PACS), the Medical Imaging Interaction Toolkit (MITK), and the National Biomedical Imaging Archive (NBIA).

Stage 3: Research and development & validation

The imaging data engineer gains insight from the extracted quantitative information by using advanced machine learning and statistical algorithms.

The biostatistician validates the models and results by cross-validation and out-of-sample techniques.
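A minimal sketch of k-fold cross-validation, the kind of out-of-sample check run at this stage. The data, model, and score are toy stand-ins: the "model" simply predicts the training-set mean, scored by mean squared error on the held-out fold.

```python
def k_fold_indices(n, k):
    """Split range(n) into k contiguous folds of near-equal size."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(y, k=3):
    """Mean squared error of a mean-predictor, averaged over k folds."""
    folds = k_fold_indices(len(y), k)
    scores = []
    for held_out in folds:
        train = [y[i] for i in range(len(y)) if i not in held_out]
        prediction = sum(train) / len(train)  # "fit": the training mean
        mse = sum((y[i] - prediction) ** 2 for i in held_out) / len(held_out)
        scores.append(mse)
    return sum(scores) / len(scores)
```

Each data point is scored exactly once while held out of training, which is what makes the estimate out-of-sample.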

Validated results and models are presented to the team in formal visualization and documentation.

Completion of the specified project metrics is verified by consensus of all team members in order to proceed to project delivery and conclusion.

Stage 4: Delivery & Payout

Covee Network enables automated data synchronization points at various stages in the workflow.

A final project document is generated from the workflow documentation to complete the delivery report and record the clinical outcome that was agreed upon.

Team members can subsequently review each other based on contribution and quality of work. Finally, the staked tokens and the client fee are distributed accordingly.
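As a hedged sketch, the payout step can be thought of as splitting the client fee in proportion to each member's agreed contribution share. The shares and fee amount below are invented for illustration; the real payout logic lives in Covee's smart contracts.

```python
def distribute_fee(contributions, client_fee):
    """contributions: {member: share}; shares need not sum to 1."""
    total = sum(contributions.values())
    return {member: client_fee * share / total
            for member, share in contributions.items()}

# Hypothetical contribution shares for the team in this use case.
team = {"clinician": 0.25, "radiologist": 0.25,
        "imaging data engineer": 0.30, "biostatistician": 0.20}
print(distribute_fee(team, client_fee=10_000))
```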