diff --git a/mevislab.github.io/content/about/about.md b/mevislab.github.io/content/about/about.md index b99e66ffe..accd618d1 100644 --- a/mevislab.github.io/content/about/about.md +++ b/mevislab.github.io/content/about/about.md @@ -19,7 +19,7 @@ Hints common mistakes or steps you should consider beforehand. {{}} ## Keyboard Shortcuts -Keyboard shortcuts are incorporated like this: {{< keyboard "CTRL" "ALT" "2" >}}. +Keyboard shortcuts are incorporated like this: {{< keyboard "Ctrl" "Alt" "2" >}}. ## Networks The networks shown and used in the tutorials can be found in the [Examples](examples) section of this page. diff --git a/mevislab.github.io/content/contact.md b/mevislab.github.io/content/contact.md index de7bda0e6..5ff54e2c2 100644 --- a/mevislab.github.io/content/contact.md +++ b/mevislab.github.io/content/contact.md @@ -10,11 +10,11 @@ draft: false Having any questions on MeVisLab Licensing? Please contact the [MeVisLab Sales Team](mailto://sales@mevislab.de) #### MeVisLab Forum -Searching for a forum to ask your specific MeVisLab questions? Having trouble with functionalities? Ask [here](https://forum.mevislab.de)! Someone else might know the answer. If not - one of our developers will help you out! +Searching for a forum to ask your specific MeVisLab questions? Having trouble with functionalities? Ask [here](https://forum.mevislab.de)! Someone else might know the answer. If not — one of our developers will help you out! #### General Questions General questions regarding MeVisLab? Don't hesitate to contact the [MeVisLab Team](mailto://info@mevislab.de). #### YouTube -Also: If you haven't yet - have a look at our [YouTube Channel](https://www.youtube.com/channel/UCUGi64NseroIGjga8l7EX8g). You will find a variety of helpful tutorials provided to you by the MeVisLab Team. +Also: If you haven't yet — have a look at our [YouTube Channel](https://www.youtube.com/channel/UCUGi64NseroIGjga8l7EX8g). 
You will find a variety of helpful tutorials provided to you by the MeVisLab Team. diff --git a/mevislab.github.io/content/examples/basic_mechanisms/contour_filter/index.md b/mevislab.github.io/content/examples/basic_mechanisms/contour_filter/index.md index 0079a1aa1..9880e3051 100644 --- a/mevislab.github.io/content/examples/basic_mechanisms/contour_filter/index.md +++ b/mevislab.github.io/content/examples/basic_mechanisms/contour_filter/index.md @@ -13,7 +13,7 @@ Additionally, the images are modified by a local macro module `Filter` and shown In order to display the same slice (unchanged and changed), the module `SyncFloat` is used to synchronize the field value startSlice in both viewers. The `SyncFloat` module duplicates the value Float1 to the field Float2 if it differs by Epsilon. -![Screenshot](examples/basic_mechanisms/contour_filter/image.png) +![Example network](examples/basic_mechanisms/contour_filter/image.png "Example network") # Download You can download the example network [here](examples/basic_mechanisms/contour_filter/ContourFilter.zip) diff --git a/mevislab.github.io/content/examples/basic_mechanisms/macro_modules_and_module_interaction/example1/index.md b/mevislab.github.io/content/examples/basic_mechanisms/macro_modules_and_module_interaction/example1/index.md index 5dc96939d..0007832c9 100644 --- a/mevislab.github.io/content/examples/basic_mechanisms/macro_modules_and_module_interaction/example1/index.md +++ b/mevislab.github.io/content/examples/basic_mechanisms/macro_modules_and_module_interaction/example1/index.md @@ -8,7 +8,7 @@ category: "basic_mechanisms" This example contains an entire package structure. Inside, you can find the example contour filter for which a panel was created. ## Summary -A new macro module `Filter` has been created. Initially, macro modules do not provide an own panel containing user interface elements such as buttons. The *Automatic Panel* is shown on double-clicking the module providing the name of the module. 
+A new macro module `Filter` has been created. Initially, macro modules do not provide a panel of their own containing user interface elements such as buttons. The *Automatic Panel* is shown on double-clicking {{< mousebutton "left" >}} the module, providing the name of the module. In this example we update the *.script* file of the `Filter` module to display the Kernel field of the `Convolution` module within its network. diff --git a/mevislab.github.io/content/examples/basic_mechanisms/macro_modules_and_module_interaction/example2/index.md b/mevislab.github.io/content/examples/basic_mechanisms/macro_modules_and_module_interaction/example2/index.md index 39ee2a25f..45bd1e412 100644 --- a/mevislab.github.io/content/examples/basic_mechanisms/macro_modules_and_module_interaction/example2/index.md +++ b/mevislab.github.io/content/examples/basic_mechanisms/macro_modules_and_module_interaction/example2/index.md @@ -12,9 +12,9 @@ A new macro module `IsoCSOs` is created providing two viewers in its internal ne To showcase how Python functions can be implemented in MeVisLab and called from within a module, additional buttons to browse directories and create contours via the `CSOIsoGenerator` are added. Lastly, a field listener is implemented that reacts to field changes by colorizing contours when the user hovers over them with the mouse. -![Screenshot](examples/basic_mechanisms/macro_modules_and_module_interaction/example2/image2.png) +![Example network](examples/basic_mechanisms/macro_modules_and_module_interaction/example2/image2.png "Example network") -![Screenshot](examples/basic_mechanisms/macro_modules_and_module_interaction/example2/image.png) +![Coloring a contour with Python](examples/basic_mechanisms/macro_modules_and_module_interaction/example2/image.png "Coloring a contour with Python") # Download The files need to be added to a package.
You can download the example network [here](examples/basic_mechanisms/macro_modules_and_module_interaction/example2/ScriptingExample2.zip) diff --git a/mevislab.github.io/content/examples/basic_mechanisms/viewer_application/index.md b/mevislab.github.io/content/examples/basic_mechanisms/viewer_application/index.md index 870902907..e51756db3 100644 --- a/mevislab.github.io/content/examples/basic_mechanisms/viewer_application/index.md +++ b/mevislab.github.io/content/examples/basic_mechanisms/viewer_application/index.md @@ -7,7 +7,7 @@ category: "basic_mechanisms" # Example 3: Creating a Simple Application In this example, you will learn how to create a simple prototype application in MeVisLab including a user interface (UI) with 2D and 3D viewers. -![Screenshot](examples/basic_mechanisms/viewer_application/image.png) +![Simple application with a 2D and a 3D viewer](examples/basic_mechanisms/viewer_application/image.png "Simple application with a 2D and a 3D viewer") # Download You can download the example network [here](examples/basic_mechanisms/viewer_application/viewerexample.mlab) diff --git a/mevislab.github.io/content/examples/data_objects/contours/example1/index.md b/mevislab.github.io/content/examples/data_objects/contours/example1/index.md index 10754a876..74e8094b8 100644 --- a/mevislab.github.io/content/examples/data_objects/contours/example1/index.md +++ b/mevislab.github.io/content/examples/data_objects/contours/example1/index.md @@ -9,7 +9,7 @@ Contours are stored as Contour Segmentation Objects (CSOs) in MeVisLab. This example highlights ways of creating CSOs using modules of the `SoCSOEditor` group. {{}} -You may want to look at the glossary entry on [*CSOs*](glossary/#contour-segmented-objects). +You may want to look at the glossary entry on [*CSOs*](glossary/#contour-segmentation-objects). 
{{}} The `SoCSOEditor` module group contains several modules, some of which are listed right below: @@ -30,13 +30,13 @@ The `SoCSOEditor` module group contains several modules, some of which are liste Whenever Contour Segmentation Objects are created, they are temporarily stored by and can be managed with the `CSOManager`. {{}} -In this example, contours are created and colors and styles of these CSOs are customized by using the `SoCSOVisualizationSettings` module. +In this example, contours are created, and colors and styles of these CSOs are customized by using the `SoCSOVisualizationSettings` module. -![Screenshot](examples/data_objects/contours/example1/image.png) +![Visualization of a spline CSO is customized](examples/data_objects/contours/example1/image.png "Visualization of a spline CSO is customized") ## Summary -* Contours are stored as their own abstract data type called Contour Segmentation Objects (often abbreviated to *CSO*). -* The `SoCSO\*Editor` module group contains several useful modules to create, interact with or modify CSOs. +* Contours are stored as their own abstract data type called Contour Segmentation Objects (abbreviated to *CSO*). +* The `SoCSO\*Editor` module group contains several useful modules to create, interact with, or modify CSOs. * Created CSOs are temporarily stored and can be managed using the `CSOManager`. # Download diff --git a/mevislab.github.io/content/examples/data_objects/contours/example2/index.md b/mevislab.github.io/content/examples/data_objects/contours/example2/index.md index 180e1c675..279829a95 100644 --- a/mevislab.github.io/content/examples/data_objects/contours/example2/index.md +++ b/mevislab.github.io/content/examples/data_objects/contours/example2/index.md @@ -12,7 +12,7 @@ In this example, semiautomatic countours are created using the `SoCSOLiveWireEdi Additional contours between the manually created ones are generated by the `CSOSliceInterpolator` and added to the `CSOManager`. 
Different groups of contours are created for the left and right lobe of the lung and colored respectively. -![Screenshot](examples/data_objects/contours/example2/image.png) +![Manually created CSOs are automatically interpolated](examples/data_objects/contours/example2/image.png "Manually created CSOs are automatically interpolated") # Download You can download the example network [here](examples/data_objects/contours/example2/ContourExample2.mlab) diff --git a/mevislab.github.io/content/examples/data_objects/contours/example3/index.md b/mevislab.github.io/content/examples/data_objects/contours/example3/index.md index 030b6f8f9..b8db24f15 100644 --- a/mevislab.github.io/content/examples/data_objects/contours/example3/index.md +++ b/mevislab.github.io/content/examples/data_objects/contours/example3/index.md @@ -14,7 +14,7 @@ The module `VoxelizeCSO` is used to create a three-dimensional voxel mask of the Lastly, the panel of the `View3D` module is used to visualize the voxel mask in 3D. -![Screenshot](examples/data_objects/contours/example3/image.png) +![Manually created CSOs are automatically interpolated and shown in 2D and in 3D](examples/data_objects/contours/example3/image.png "Manually created CSOs are automatically interpolated and shown in 2D and in 3D") # Download You can download the example network [here](examples/data_objects/contours/example3/ContourExample3.mlab) diff --git a/mevislab.github.io/content/examples/data_objects/contours/example4/index.md b/mevislab.github.io/content/examples/data_objects/contours/example4/index.md index cea3e1e4b..134407c1f 100644 --- a/mevislab.github.io/content/examples/data_objects/contours/example4/index.md +++ b/mevislab.github.io/content/examples/data_objects/contours/example4/index.md @@ -10,7 +10,7 @@ This example shows how to add annotations to an image. 
## Summary In this example, the network of **Contour Example 3** is extended, so that the volume of the 3D mask generated by the `VoxelizeCSO` module is calculated. The `CalculateVolume` module counts the number of voxels in the given mask and returns the correct volume in ml. The calculated volume will be used for a custom `SoView2DAnnotation` displayed in the `View2D`. -![Screenshot](examples/data_objects/contours/example4/image.png) +![Volume of a mask image is calculated and shown as an annotation](examples/data_objects/contours/example4/image.png "Volume of a mask image is calculated and shown as an annotation") # Download You can download the example network [here](examples/data_objects/contours/example4/ContourExample4.mlab) diff --git a/mevislab.github.io/content/examples/data_objects/contours/example5/index.md b/mevislab.github.io/content/examples/data_objects/contours/example5/index.md index 29bfd52c8..05af2b4b2 100644 --- a/mevislab.github.io/content/examples/data_objects/contours/example5/index.md +++ b/mevislab.github.io/content/examples/data_objects/contours/example5/index.md @@ -12,9 +12,9 @@ In this example, the `CSOIsoGenerator` is used to generate contours based on a g "Ghosting" means not only showing contours available on the currently visible slice but also contours on the neighboring slices with increasing transparency. -The contours are also displayed in a three-dimensionsl `SoExaminerViewer` by using the `SoCSO3DRenderer`. +The contours are also displayed in a three-dimensional `SoExaminerViewer` by using the `SoCSO3DRenderer`.
-![Screenshot](examples/data_objects/contours/example5/image.png) +![CSOs on slices below and above the current slice are shown with ghosting](examples/data_objects/contours/example5/image.png "CSOs on slices below and above the current slice are shown with ghosting") # Download You can download the example network [here](examples/data_objects/contours/example5/ContourExample5.mlab) diff --git a/mevislab.github.io/content/examples/data_objects/curves/example1/index.md b/mevislab.github.io/content/examples/data_objects/curves/example1/index.md index fac8f47db..47236630e 100644 --- a/mevislab.github.io/content/examples/data_objects/curves/example1/index.md +++ b/mevislab.github.io/content/examples/data_objects/curves/example1/index.md @@ -6,7 +6,7 @@ category: "data_objects" # Curves Example: Drawing Curves This examples shows how to create and render curves. -![Screenshot](examples/data_objects/curves/example1/image.png) +![Multiple curves are rendered in a single 2D viewer](examples/data_objects/curves/example1/image.png "Multiple curves are rendered in a single 2D viewer") # Download You can download the example network [here](examples/data_objects/curves/example1/Curves.mlab) diff --git a/mevislab.github.io/content/examples/data_objects/markers/example1/index.md b/mevislab.github.io/content/examples/data_objects/markers/example1/index.md index 83ee2e3e9..da3e17ea8 100644 --- a/mevislab.github.io/content/examples/data_objects/markers/example1/index.md +++ b/mevislab.github.io/content/examples/data_objects/markers/example1/index.md @@ -6,7 +6,7 @@ category: "data_objects" # Marker Example 1: Distance Between Markers This examples shows how to create markers in a viewer and measure their distance. 
-![Screenshot](examples/data_objects/markers/example1/image.png) +![Distances of the red markers to the green marker are measured](examples/data_objects/markers/example1/image.png "Distances of the red markers to the green marker are measured") # Download You can download the example network [here](examples/data_objects/markers/example1/Marker_Example1.mlab) diff --git a/mevislab.github.io/content/examples/data_objects/surface_objects/example1/index.md b/mevislab.github.io/content/examples/data_objects/surface_objects/example1/index.md index 1bc93beba..04c10932c 100644 --- a/mevislab.github.io/content/examples/data_objects/surface_objects/example1/index.md +++ b/mevislab.github.io/content/examples/data_objects/surface_objects/example1/index.md @@ -6,7 +6,7 @@ category: "data_objects" # Surface Example 1: Creation of WEMs This example shows how to create WEMs out of voxel images and CSOs. -![Screenshot](examples/data_objects/surface_objects/example1/image.png) +![WEM surface is created from a voxel image](examples/data_objects/surface_objects/example1/image.png "WEM surface is created from a voxel image") # Download You can download the example network [here](examples/data_objects/surface_objects/example1/SurfaceExample1.mlab) diff --git a/mevislab.github.io/content/examples/data_objects/surface_objects/example2/index.md b/mevislab.github.io/content/examples/data_objects/surface_objects/example2/index.md index e6388808b..70da33219 100644 --- a/mevislab.github.io/content/examples/data_objects/surface_objects/example2/index.md +++ b/mevislab.github.io/content/examples/data_objects/surface_objects/example2/index.md @@ -6,7 +6,7 @@ category: "data_objects" # Surface Example 2: Processing and Modifying of WEMs This example shows how to process and modify WEMs using the modules `WEMModify`, `WEMSmooth`, and `WEMSurfaceDistance`. 
-![Screenshot](examples/data_objects/surface_objects/example2/DO7_03.png) +![Modified and smoothed WEM is compared to its original](examples/data_objects/surface_objects/example2/DO7_03.png "Modified and smoothed WEM is compared to its original") # Download You can download the example network [here](examples/data_objects/surface_objects/example2/SurfaceExample2.mlab) diff --git a/mevislab.github.io/content/examples/data_objects/surface_objects/example3/index.md b/mevislab.github.io/content/examples/data_objects/surface_objects/example3/index.md index f72a0e4b8..20829917a 100644 --- a/mevislab.github.io/content/examples/data_objects/surface_objects/example3/index.md +++ b/mevislab.github.io/content/examples/data_objects/surface_objects/example3/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "Apply Transformations to a 3D WEM Object Via Mouse Interactions" +title: "Apply Transformations to a 3D WEM Object via Mouse Interactions" category: "data_objects" --- @@ -8,14 +8,14 @@ category: "data_objects" ## Scale, Rotate, and Move a WEM in a Scene In this example, we are using a `SoTransformerDragger` module to apply transformations on a 3D WEM object via mouse interactions. -![Screenshot](examples/data_objects/surface_objects/example3/image.png) +![Open Inventor cube is interactively transformed](examples/data_objects/surface_objects/example3/image.png "Open Inventor cube is interactively transformed") ### Download You can download the example network [here](examples/data_objects/surface_objects/example3/SurfaceExample3.mlab) ## Interactively Modify WEMs In this example, we are using a `SoWEMBulgeEditor` module to modify a WEM using the mouse. 
-![Screenshot](examples/data_objects/surface_objects/example3/image2.png) +![WEM surface is interactively deformed](examples/data_objects/surface_objects/example3/image2.png "WEM surface is interactively deformed") ### Download You can download the example network [here](examples/data_objects/surface_objects/example3/WEMExample3b.mlab) \ No newline at end of file diff --git a/mevislab.github.io/content/examples/data_objects/surface_objects/example4/index.md b/mevislab.github.io/content/examples/data_objects/surface_objects/example4/index.md index d169e8863..09cfef748 100644 --- a/mevislab.github.io/content/examples/data_objects/surface_objects/example4/index.md +++ b/mevislab.github.io/content/examples/data_objects/surface_objects/example4/index.md @@ -6,7 +6,7 @@ category: "data_objects" # Surface Example 4: Interactively Moving WEM This example shows how to use dragger modules to modify objects in a 3D viewer. -![Screenshot](examples/data_objects/surface_objects/example4/image.png) +![Spheroid WEM surface and a custom interactive dragger](examples/data_objects/surface_objects/example4/image.png "Spheroid WEM surface and a custom interactive dragger") # Download You can download the example network [here](examples/data_objects/surface_objects/example4/SurfaceExample4.zip) diff --git a/mevislab.github.io/content/examples/data_objects/surface_objects/example5/index.md b/mevislab.github.io/content/examples/data_objects/surface_objects/example5/index.md index 6484669c5..53663f103 100644 --- a/mevislab.github.io/content/examples/data_objects/surface_objects/example5/index.md +++ b/mevislab.github.io/content/examples/data_objects/surface_objects/example5/index.md @@ -6,7 +6,7 @@ category: "data_objects" # Surface Example 5: WEM - Primitive Value Lists This example shows how to use Primitive Value Lists (PVLs). With the help of PVLs, the distance between the surfaces of WEMs is color-coded. 
-![Screenshot](examples/data_objects/surface_objects/example5/image.png) +![Color-coding distances via PVLs](examples/data_objects/surface_objects/example5/image.png "Color-coding distances via PVLs") # Download You can download the example network [here](examples/data_objects/surface_objects/example5/SurfaceExample5.mlab) diff --git a/mevislab.github.io/content/examples/howto.md b/mevislab.github.io/content/examples/howto.md index 99a205019..8d489309f 100644 --- a/mevislab.github.io/content/examples/howto.md +++ b/mevislab.github.io/content/examples/howto.md @@ -20,7 +20,7 @@ The provided files are usually either *.mlab* files or *.zip* archives. You will MeVisLab files are networks stored as *.mlab* files.
{{}} -Double-clicking the left mouse button within your MeVisLab workspace works as a shortcut to open files. +Double-clicking {{< mousebutton "left" >}} within your MeVisLab workspace works as a shortcut to open files. {{}} Files can also be opened using the menu option {{< menuitem "File" "Open">}}. @@ -30,7 +30,7 @@ Archives mostly contain macro modules.
To use those macro modules, you will need to know how to handle user packages. {{}} -See [Example 2.1: Package creation](tutorials/basicmechanisms/macromodules/package/) for more information on packages in MeVisLab. +See [Example 2.1: Package Creation](tutorials/basicmechanisms/macromodules/package/) for more information on packages in MeVisLab. {{}} The contents can be extracted into the directory of your package. Make sure to keep the directory's structure for the examples to be loaded and displayed correctly. @@ -41,7 +41,7 @@ The typical directory structure of a MeVisLab package looks like this: The package *TutorialSummary* within the package group *MeVis* is shown above. A package typically contains at least a *Projects* directory, which is where the macro modules are located. When extracting the contents of a *.zip* file, the *Projects* folder of your package should be the target directory. Sometimes we even provide test cases. Extract them into the *TestCases* directory. -![Package directory structure](images/examples/howto_2.png "Package directory structure") +![Package directory content](images/examples/howto_2.png "Package directory content") {{}} Feel free to create certain directories if they do not exist yet, but make sure to name them conforming the directory structure shown above. diff --git a/mevislab.github.io/content/examples/image_processing/example1/index.md b/mevislab.github.io/content/examples/image_processing/example1/index.md index 947758a7c..1cb1f5aec 100644 --- a/mevislab.github.io/content/examples/image_processing/example1/index.md +++ b/mevislab.github.io/content/examples/image_processing/example1/index.md @@ -1,16 +1,16 @@ --- layout: post -title: "Arithmetic operations on two images" +title: "Arithmetic Operations on Two Images" category: "image_processing" --- # Image Processing Example 1: Arithmetic Operations on Two Images -In this example, we apply scalar functions on two images like Add, Multiply, Subtract, etc. 
+In this example, we apply scalar functions on two images like Add, Multiply, or Subtract. ## Summary We are loading two images by using the `LocalImage` module and show them in a `SynchroView2D`. In addition to that, both images are used for arithmetic processing in the module `Arithmetic2`. -![Screenshot](examples/image_processing/example1/image.png) +![Adding two voxel images](examples/image_processing/example1/image.png "Adding two voxel images") # Download You can download the example network [here](examples/image_processing/example1/BasicFilter.mlab) diff --git a/mevislab.github.io/content/examples/image_processing/example2/index.md b/mevislab.github.io/content/examples/image_processing/example2/index.md index 88c8e2330..9f548352a 100644 --- a/mevislab.github.io/content/examples/image_processing/example2/index.md +++ b/mevislab.github.io/content/examples/image_processing/example2/index.md @@ -10,7 +10,7 @@ In this example, we create a simple mask on an image, so that background voxels ## Summary We are loading images by using the `LocalImage` module and show them in a `SynchroView2D`. The same image is shown in the right viewer of the `SynchroView2D` but with a `Threshold`-based `Mask`. 
-![Screenshot](examples/image_processing/example2/image.png) +![Masking an image with a threshold-based mask image](examples/image_processing/example2/image.png "Masking an image with a threshold-based mask image") # Download You can download the example network [here](examples/image_processing/example2/ImageMask.mlab) diff --git a/mevislab.github.io/content/examples/image_processing/example3/index.md b/mevislab.github.io/content/examples/image_processing/example3/index.md index 27a928e10..21aa60599 100644 --- a/mevislab.github.io/content/examples/image_processing/example3/index.md +++ b/mevislab.github.io/content/examples/image_processing/example3/index.md @@ -10,7 +10,7 @@ In this example, we create a simple mask on an image by using the `RegionGrowing ## Summary We are loading images by using the `LocalImage` module and show them in a `SynchroView2D`. The same image is used as input for the `RegionGrowing` module. The starting point for the algorithm is a list of markers created by the `SoView2DMarkerEditor`. As the `RegionGrowing` may leave gaps, an additional `CloseGap` module is added. The resulting segmentation mask is shown as an overlay on the original image via `SoView2DOverlay`. -![Screenshot](examples/image_processing/example3/image.png) +![Segmenting with the region growing algorithm](examples/image_processing/example3/image.png "Segmenting with the region growing algorithm") # Download You can download the example network [here](examples/image_processing/example3/RegionGrowingExample.mlab) diff --git a/mevislab.github.io/content/examples/image_processing/example4/index.md b/mevislab.github.io/content/examples/image_processing/example4/index.md index b6686e3ea..f85ba0462 100644 --- a/mevislab.github.io/content/examples/image_processing/example4/index.md +++ b/mevislab.github.io/content/examples/image_processing/example4/index.md @@ -10,7 +10,7 @@ In this example, we subtract a sphere from another WEM. 
## Summary We are loading images by using the `LocalImage` module and render them as a 3D scene in a `SoExaminerViewer`. We also add a sphere that is then subtracted from the original surface. -![Screenshot](examples/image_processing/example4/image.png) +![Subtracting a sphere from a surface](examples/image_processing/example4/image.png "Subtracting a sphere from a surface") # Download You can download the example network [here](examples/image_processing/example4/Subtract3DObjects.mlab) diff --git a/mevislab.github.io/content/examples/image_processing/example5/index.md b/mevislab.github.io/content/examples/image_processing/example5/index.md index 23db05a1e..0cb2ab44f 100644 --- a/mevislab.github.io/content/examples/image_processing/example5/index.md +++ b/mevislab.github.io/content/examples/image_processing/example5/index.md @@ -10,7 +10,7 @@ In this example, we are using the currently visible slice from a 2D view as a cl ## Summary We are loading images by using the `LocalImage` module and render them as a two-dimensional image stack `SoRenderArea`. The displayed slice is used to create a 3D plane/clip plane in a `SoExaminerViewer`. 
-![Screenshot](examples/image_processing/example5/image.png) +![Showing a slice in 2D, in 3D, and using it as a clip plane in 3D](examples/image_processing/example5/image.png "Showing a slice in 2D, in 3D, and using it as a clip plane in 3D") # Download You can download the example network [here](examples/image_processing/example4/ImageProcessingExample5.mlab) diff --git a/mevislab.github.io/content/examples/open_inventor/example1/index.md b/mevislab.github.io/content/examples/open_inventor/example1/index.md index 8d9a84bbf..964dd5236 100644 --- a/mevislab.github.io/content/examples/open_inventor/example1/index.md +++ b/mevislab.github.io/content/examples/open_inventor/example1/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "Open Inventor objects" +title: "Open Inventor Objects" category: "open_inventor" --- @@ -14,7 +14,7 @@ Three 3D objects are created (`SoCone`, `SoSphere`, and `SoCube`) having a defin In the end, all three objects including their materials and transformations are added to the `SoExaminerViewer` by a `SoGroup`. -![Screenshot](examples/open_inventor/example1/image.png) +![Localizing material and transformation properties with SoSeparator](examples/open_inventor/example1/image.png "Localizing material and transformation properties with SoSeparator") # Download You can download the example network [here](examples/open_inventor/example1/OpenInventorExample1.mlab) diff --git a/mevislab.github.io/content/examples/open_inventor/example2/index.md b/mevislab.github.io/content/examples/open_inventor/example2/index.md index 4c81d6b60..d6733aa60 100644 --- a/mevislab.github.io/content/examples/open_inventor/example2/index.md +++ b/mevislab.github.io/content/examples/open_inventor/example2/index.md @@ -10,7 +10,7 @@ This example shows how to implement object interactions. ## Summary A `SoExaminerViewer` is used to render a `SoCube` object. The `SoMouseGrabber` is used to identify mouse interactions in the viewer and to resize the cube. 
-![Screenshot](examples/open_inventor/example2/image.png) +![Using a SoMouseGrabber to resize a SoCube](examples/open_inventor/example2/image.png "Using a SoMouseGrabber to resize a SoCube") # Download You can download the example network [here](examples/open_inventor/example2/OpenInventorExample2.mlab) diff --git a/mevislab.github.io/content/examples/open_inventor/example3/index.md b/mevislab.github.io/content/examples/open_inventor/example3/index.md index 5b7fab0f4..64efab8c7 100644 --- a/mevislab.github.io/content/examples/open_inventor/example3/index.md +++ b/mevislab.github.io/content/examples/open_inventor/example3/index.md @@ -10,7 +10,7 @@ This example shows different options for using a camera and different viewers in ## Summary We will show the difference between a `SoRenderArea` and a `SoExaminerViewer` and use different modules of the `SoCamera*` group. -![Screenshot](examples/open_inventor/example3/image.png) +![Network with different cameras and viewers](examples/open_inventor/example3/image.png "Network with different cameras and viewers") # Download You can download the example network [here](examples/open_inventor/example3/CameraInteractions.mlab) diff --git a/mevislab.github.io/content/examples/open_inventor/example4/index.md b/mevislab.github.io/content/examples/open_inventor/example4/index.md index 82ac6f097..9a2167d9c 100644 --- a/mevislab.github.io/content/examples/open_inventor/example4/index.md +++ b/mevislab.github.io/content/examples/open_inventor/example4/index.md @@ -12,7 +12,7 @@ This example has been taken from the [MeVisLab forum](https://forum.mevislab.de/ ## Summary A local macro `flightControl` allows you to navigate with the camera through the scene. 
-![Screenshot](examples/open_inventor/example4/image.png) +![Flying through a spline](examples/open_inventor/example4/image.png "Flying through a spline") # Download You can download the example network [here](examples/open_inventor/example4/flight2.zip) diff --git a/mevislab.github.io/content/examples/thirdparty/pytorch1/index.md b/mevislab.github.io/content/examples/thirdparty/pytorch1/index.md index 51998f30e..3e82d8436 100644 --- a/mevislab.github.io/content/examples/thirdparty/pytorch1/index.md +++ b/mevislab.github.io/content/examples/thirdparty/pytorch1/index.md @@ -7,7 +7,7 @@ category: "thirdparty" # Third-party Example 5: Segmentation in Webcam Stream by using PyTorch This macro module segments a person shown in a webcam stream by using a pretrained network from PyTorch (torchvision). -![Screenshot](images/tutorials/thirdparty/pytorch_example3_10.png) +![Segmenting persons with PyTorch](images/tutorials/thirdparty/pytorch_example3_10.png "Segmenting persons with PyTorch") # Download You can download the Python files [here](examples/thirdparty/pytorch1/PyTorchSegmentationExample.zip) diff --git a/mevislab.github.io/content/examples/visualization/example1/index.md b/mevislab.github.io/content/examples/visualization/example1/index.md index 4cdb94be3..14b217c54 100644 --- a/mevislab.github.io/content/examples/visualization/example1/index.md +++ b/mevislab.github.io/content/examples/visualization/example1/index.md @@ -7,7 +7,7 @@ category: "visualization" # Visualization Example 1: Synchronous View of Two Images This simple example shows how to load an image and apply a basic `Convolution` filter to the image. The image with and without filter is shown in a viewer and scrolling is synchronized, so that the same slice is shown for both images. 
-![Screenshot](examples/visualization/example1/image.png) +![Comparing slices](examples/visualization/example1/image.png "Comparing slices") # Download You can download the example network [here](examples/visualization/example1/VisualizationExample1.mlab) diff --git a/mevislab.github.io/content/examples/visualization/example2/index.md b/mevislab.github.io/content/examples/visualization/example2/index.md index bd293c06d..ef270e061 100644 --- a/mevislab.github.io/content/examples/visualization/example2/index.md +++ b/mevislab.github.io/content/examples/visualization/example2/index.md @@ -6,7 +6,7 @@ category: "visualization" # Visualization Example 2: Creating a Magnifier This example shows how to create a magnifier. Using the module `SubImage`, a fraction of the original image can be extracted and enlarged. -![Screenshot](examples/visualization/example2/image.png) +![Creating a magnifier with modules](examples/visualization/example2/image.png "Creating a magnifier with modules") # Download You can download the example network [here](examples/visualization/example2/VisualizationExample2.mlab) diff --git a/mevislab.github.io/content/examples/visualization/example3/index.md b/mevislab.github.io/content/examples/visualization/example3/index.md index fb717ee4e..44f90e025 100644 --- a/mevislab.github.io/content/examples/visualization/example3/index.md +++ b/mevislab.github.io/content/examples/visualization/example3/index.md @@ -6,7 +6,7 @@ category: "visualization" # Visualization Example 3: Image Overlays This example shows the creation of an overlay. Using the module `SoView2DOverlay`, an overlay can be blended over a 2D image. 
-![Screenshot](examples/visualization/example3/image.png) +![Showing a mask as an overlay](examples/visualization/example3/image.png "Showing a mask as an overlay") # Download You can download the example network [here](examples/visualization/example3/VisualizationExample3.mlab) diff --git a/mevislab.github.io/content/examples/visualization/example4/index.md b/mevislab.github.io/content/examples/visualization/example4/index.md index 943d4148b..aa3ae9ce0 100644 --- a/mevislab.github.io/content/examples/visualization/example4/index.md +++ b/mevislab.github.io/content/examples/visualization/example4/index.md @@ -6,7 +6,7 @@ category: "visualization" # Visualization Example 4: Display Images Converted to Open Inventor Scene Objects This example shows how to convert a slice or slab of voxel images to 2D renderings on the screen using the module `SoView2D` and modules based on SoView2DExtension. -![Screenshot](examples/visualization/example4/image.png) +![Displaying slices with SoView2D](examples/visualization/example4/image.png "Displaying slices with SoView2D") # Download You can download the example network [here](examples/visualization/example4/VisualizationExample4.mlab) diff --git a/mevislab.github.io/content/examples/visualization/example5/index.md b/mevislab.github.io/content/examples/visualization/example5/index.md index c2418db88..d2cbe8da2 100644 --- a/mevislab.github.io/content/examples/visualization/example5/index.md +++ b/mevislab.github.io/content/examples/visualization/example5/index.md @@ -6,7 +6,7 @@ category: "visualization" # Visualization Example 5: Volume Rendering and Interactions This example shows the volume rendering of a scan. The texture of the volume is edited and animations are implemented.
-![Screenshot](examples/visualization/example5/image.png) +![Automatically rotating a 3D volume rendering](examples/visualization/example5/image.png "Automatically rotating a 3D volume rendering") # Download You can download the example network [here](examples/visualization/example5/VisualizationExample5.mlab) diff --git a/mevislab.github.io/content/examples/visualization/example6/index.md b/mevislab.github.io/content/examples/visualization/example6/index.md index 2031b2b8a..06b4c714c 100644 --- a/mevislab.github.io/content/examples/visualization/example6/index.md +++ b/mevislab.github.io/content/examples/visualization/example6/index.md @@ -6,7 +6,7 @@ category: "visualization" # Visualization Example 6.1: Volume Rendering vs. Path Tracing This example shows a comparison between Volume Rendering and Path Tracing. The same scene is rendered and the camera interactions in both viewers are synchronized. -![Screenshot](examples/visualization/example6/image.png) +![Volume rendering vs. path tracing](examples/visualization/example6/image.png "Volume rendering vs. path tracing") # Download You can download the example network [here](examples/visualization/example6/pathtracer1.mlab) diff --git a/mevislab.github.io/content/introduction/introduction.md b/mevislab.github.io/content/introduction/introduction.md index 4f9926906..f2fe8de3a 100644 --- a/mevislab.github.io/content/introduction/introduction.md +++ b/mevislab.github.io/content/introduction/introduction.md @@ -24,7 +24,7 @@ analysis, surgery planning, and cardiovascular analysis. MeVisLab is a development environment for rapid prototyping and product development of medical and industrial imaging applications. It includes -a [*Software Development Kit (SDK)*](glossary/#mevislab-sdk) and an [*ApplicationBuilder*](glossary/#mevislab-apk) for deploying your applications to end-customers. 
+a [*Software Development Kit (SDK)*](glossary/#mevislab-sdk) and an [*ApplicationBuilder*](glossary/#mevislab-apk) for deploying your applications to end customers. In turn, the *MeVisLab SDK* consists of an [*Integrated Development Environment (IDE)*](glossary/#mevislab-ide) for visual programming and the advanced text editor [*MATE*](glossary/#mevislab-mate) for Python @@ -50,9 +50,9 @@ You find them at the end of the tutorial or, also sorted by chapters, under the The examples under the designated menu entry are more suitable if you already have a little experience and rather search for inspiration than for explanations. ### Starting MeVisLab for the First Time -Right after installation of MeVisLab, you will find some new icons on your Desktop (if selected during setup). +Right after installing MeVisLab, you will find some new icons on your desktop (if selected during setup). -![MeVisLab Desktop Icons](images/tutorials/basicmechanics/WindowsIcons.png "MeVisLab Desktop Icons (Windows)") +![MeVisLab desktop icons (Windows)](images/tutorials/basicmechanics/WindowsIcons.png "MeVisLab desktop icons (Windows)") Use the top middle icon to start the MeVisLab IDE. You can also start the integrated text editor MATE or the ToolRunner. For this tutorial, you will generally require the IDE. @@ -63,7 +63,7 @@ Maybe postpone the usage of the *QuickStart* icons as they can cause created pac ### MeVisLab IDE User Interface {#tutorial_ide} First, start the MeVisLab IDE. After showing a Welcome Screen, the standard user interface opens. -![MeVisLab IDE User Interface](images/tutorials/introduction/IDE1.png "MeVisLab IDE User Interface") +![MeVisLab IDE user interface](images/tutorials/introduction/IDE1.png "MeVisLab IDE user interface") #### Workspace By default, MeVisLab starts with an empty [workspace](glossary/#workspace). @@ -71,7 +71,7 @@ By default, MeVisLab starts with an empty [workspace](glossary/#workspace). This is where you will develop and edit networks. 
Essentially, networks form the base of all processing and visualization pipelines, so the workspace is where the visual programming is done. #### Views Area -The standard [Views Area](glossary/#views-area) contains the [Output Inspector and Module Inspector](./tutorials/basicmechanisms#The_Output_Inspector_and_the_Module_Inspector "Output Inspector and Module Inspector"). With the help of the Output Inspector, you can visualize the modules output. +The standard [Views Area](glossary/#views-area) contains the [Output Inspector and Module Inspector](./tutorials/basicmechanisms#The_Output_Inspector_and_the_Module_Inspector "Output Inspector and Module Inspector"). With the help of the Output Inspector, you can visualize the output of the modules. {{}} Further information on each module, e.g., about [module parameters](glossary/#field), can be found using the [Module Inspector](glossary/#module-inspector). @@ -87,13 +87,13 @@ rearrange the items and add new views via {{< menuitem "Main Menu" "View" "Views {{< bootstrap-table table_class="table table-striped" >}} |
Extension
| Description | | --- | --- | -| `.mlab` | Network file, includes all information about the networks modules, their settings, their connections, and module groups. Networks developed using the `MeVisLab SDK` are stored as *.mlab* files and can only be opened having a valid SDK license. | -| `.def` | Module definition file, necessary for a module to be added to the common MeVisLab module database. May also include all MDL script parts (if they are not sourced out to the *.script* file). | +| `.mlab` | Network file: includes all information about the network's modules, their settings, their connections, and module groups. Networks developed using the `MeVisLab SDK` are stored as *.mlab* files and can only be opened with a valid SDK license. | +| `.def` | Module definition file: necessary for a module to be added to the common MeVisLab module database. May also include all MDL script parts (if they are not sourced out to the *.script* file). | | `.script` | `MDL` script file, typically includes the user interface definition of panels. See [Chapter GUI Development](./tutorials/basicmechanisms/macromodules/guidesign#Example_Paneldesign "GUI Development") for an example on GUI programming. | -| `.mlimage` | MeVisLab internal image format for 6D images saved with all DICOM tags, lossless compression, and in all data types. | -| `.mhelp` | File with descriptions of all fields and possible use cases of a module, edit- and creatable by using `MATE`. See [Help files](./tutorials/basicmechanisms/macromodules/helpfiles "Help files") for details. | -| `.py` | Python file, used for scripting in macro modules. See [Python scripting](./tutorials/basicmechanisms/macromodules/pythonscripting#TutorialPythonScripting "Python scripting") for an example on macro programming. | -| `.dcm` | DCM part of the imported DICOM file, see [Importing DICOM Data](./tutorials/basicmechanisms/dataimport#DICOMImport "Importing DICOM Data").
| +| `.mlimage` | MeVisLab internal image format for 6D images, saved with all DICOM tags, lossless compression, and in all data types. | +| `.mhelp` | File with descriptions of all fields and possible use cases of a module; can be created and edited using `MATE`. See [Help files](./tutorials/basicmechanisms/macromodules/helpfiles "Help files") for details. | +| `.py` | Python file: used for scripting in macro modules. See [Python scripting](./tutorials/basicmechanisms/macromodules/pythonscripting#TutorialPythonScripting "Python scripting") for an example on macro programming. | +| `.dcm` | DCM part of the imported DICOM file; see [Importing DICOM Data](./tutorials/basicmechanisms/dataimport#DICOMImport "Importing DICOM Data"). | {{< /bootstrap-table >}} ### Module Types {#Module_Types} @@ -119,11 +119,10 @@ If a module is invalid, it is displayed in bright red. This might happen if the |
Appearance
| Explanation | | --- | --- | | ![Invalid module](images/tutorials/introduction/MLMModuleStateInvalid.png "Invalid module") | Invalid module | - ![Macro State Invalid](images/tutorials/introduction/MLMModuleStateMacroInvalidModule.png "Macro State Invalid") | Macro containing an invalid module | + ![Macro state invalid](images/tutorials/introduction/MLMModuleStateMacroInvalidModule.png "Macro state invalid") | Macro containing an invalid module | {{< /bootstrap-table >}} -As you can see, the number of warning and error messages that are being printed to the -debug console are listed in the upper right corner of the module. This is intentional, as it enables the developer to quickly find the module causing the errors. +As you can see, the number of warning and error messages printed to the debug console is displayed in the upper right corner of the module. This is intentional, as it enables the developer to quickly find the module causing the errors. {{}} Once the debug console is cleared, the warning and error indicators next to the @@ -135,7 +134,7 @@ Informational messages are indicated in a similar manner on the same spot, but i ### Module Interactions Through the Context Menu Each module has a context menu, providing the following options: -![Context Menu of a module](images/tutorials/introduction/ModuleContextMenu.png "Context Menu of a module") +![Context menu of a module](images/tutorials/introduction/ModuleContextMenu.png "Context menu of a module") * **Show Internal Network:** [Macro modules](glossary/#macro-module) provide an entry to open the internal network. You can see what happens inside a macro module. The internal network may also contain other macro modules. * **Show Window:** If a module does not provide a user interface, you will see the automatic panel showing the module's name. Modules may additionally have one or more windows that can be opened. You can also open the Scripting Console of a module to integrate Python. 
@@ -155,9 +154,9 @@ Once again, three types can be distinguished: {{< bootstrap-table table_class="table table-striped" >}} |
Appearance
|
Shape
| Definition | | --- | --- | --- | -| ![Triangle](images/tutorials/introduction/MLMConnectorTriangle.png "Triangle - ML Image") | triangle | ML images | -| ![Circle](images/tutorials/introduction/MLMConnectorHalfCircle.png "Circle - Inventor Scene") | half-circle | Inventor scene | -| ![Square](images/tutorials/introduction/MLMConnectorSquare.png "Square - Base Object") | square | Base objects: Pointers to data structures | +| ![Triangle - ML image](images/tutorials/introduction/MLMConnectorTriangle.png "Triangle - ML image") | triangle | ML image | +| ![Circle - Open Inventor scene](images/tutorials/introduction/MLMConnectorHalfCircle.png "Circle - Open Inventor scene") | half-circle | Open Inventor scene | +| ![Square - Base object](images/tutorials/introduction/MLMConnectorSquare.png "Square - Base object") | square | Base objects: Pointers to data structures | {{< /bootstrap-table >}} {{}} @@ -194,7 +193,7 @@ Both the menu entry{{< menuitem "Modules" >}} and the Module Browser display all Therefore, both places are a good starting point when in need of a specific function, like an `ImageLoad` module. -![Modules Menu and Module Browser](images/tutorials/introduction/GSExampleNetworkViewImage01c.png "Modules Menu and Module Browser") +![Modules menu and module browser](images/tutorials/introduction/GSExampleNetworkViewImage01c.png "Modules menu and module browser") The advantage of the Module Browser is that you can right-click {{< mousebutton "right" >}} the entries, open the context menu and, for example, open the help (in your @@ -206,9 +205,9 @@ For a module to be listed, it has to be available in the [SDK](glossary/#mevisla [packages](glossary/#package). A detailed tutorial on how to create packages can be found [here](tutorials/basicmechanisms/macromodules/package/). If in doubt or missing something, check out the loaded packages in the preferences. {{}} -Usually the quickest way to add modules to a network is the quick search in the menu bar. 
It offers the possibility to search for modules by module name. By default, the search will also be extended to keywords and substrings and is case-insensitive. To change these settings, click {{< mousebutton "left" >}} the magnifier button for the search options. -![Quick Search Options](images/tutorials/introduction/MLMQuickSearch.png "Quick Search Options") +![Quick search options](images/tutorials/introduction/MLMQuickSearch.png "Quick search options") {{}} Any time you enter something in the MeVisLab GUI while not focussing a dialog window, your entry will be put into the quick search automatically. @@ -216,7 +215,7 @@ Any time you enter something in the MeVisLab GUI while not focussing a dialog wi Use the {{< keyboard "ArrowUp" >}} and {{< keyboard "ArrowDown" >}} keys on your keyboard to move to one of the listed modules. The module's description will appear next to it, allowing you to decide if this is the right module for your use case. -![Quick Search Results](images/tutorials/introduction/GSExampleNetworkViewImage02.png "Quick Search Results") +![Quick search results](images/tutorials/introduction/GSExampleNetworkViewImage02.png "Quick search results") {{}} For a more complex search, use the Module Search View.
diff --git a/mevislab.github.io/content/tutorials/basicmechanisms.md b/mevislab.github.io/content/tutorials/basicmechanisms.md index 63ecf6740..9c575cf0d 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms.md @@ -14,7 +14,7 @@ menu: --- ## Basic Mechanisms of MeVisLab (Example: Building a Contour Filter) {#TutorialBasicMechanics} -In this chapter you will learn the basic mechanisms of the MeVisLab IDE. You will learn how to reuse existing modules to load and view data, and you will build your first processing pipeline. +In this chapter, you will learn the basic mechanisms of the MeVisLab IDE. You will learn how to reuse existing modules to load and view data, and you will build your first processing pipeline. {{< youtube "hRspMChITE4">}} @@ -27,11 +27,11 @@ Additional information on the basics of MeVisLab are explained {{< docuLinks "/R ### Loading Data {#TutorialLoadingData} First, we need to load the data we would like to work on, e.g., a CT scan. In MeVisLab, modules are used to perform their associated specific task: they are the basic entities you will be working with. Each module has a different functionality for processing, visualization, and interaction. Connecting modules enables the development of complex processing pipelines. You will get to know different types of modules throughout the course of this tutorial. -Starting off, we will add the module `ImageLoad` to our network to load our data. The module can be found by typing its name into the search bar on the top-right corner and is added to your network by clicking it {{< mousebutton "left" >}}. +Starting off, we will add the module `ImageLoad` to our network to load our data. The module can be found by typing its name into the search bar on the top-right corner and is added to your network by clicking it {{< mousebutton "left" >}}. 
![Search for ImageLoad](images/tutorials/basicmechanics/BM_01.png "Search for ImageLoad") -Next, we select and load the data we'd like to process. Double-click {{< mousebutton "left" >}} the module `ImageLoad` to open its panel. You can browse through your folders to select the data you'd like to open. Example data can be found in the MeVisLab DemoData directory *$(InstallDir)/Packages/MeVisLab/Resources/DemoData* located in the MeVisLab installation path. Select a file, for example, an MRI scan of a shoulder *Shoulder_Fracture.tif*. The image is loaded immediately and basic information of the loaded image can be seen in the Panel. +Next, we select and load the data we'd like to process. Double-click {{< mousebutton "left" >}} the module `ImageLoad` to open its panel. You can browse through your folders to select the data you'd like to open. Example data can be found in the MeVisLab DemoData directory *$(InstallDir)/Packages/MeVisLab/Resources/DemoData* located in the MeVisLab installation path. Select a file, for example, an MRI scan of a shoulder *Shoulder_Fracture.tif*. The image is loaded immediately and basic information of the loaded image can be seen in the panel. {{}} There also are modules to load multiple other formats of data. These are the most common ones: @@ -66,16 +66,16 @@ You are not restricted to 2D. The Output Inspector offers a 3D View of most load * F = feet {{}} -Below the Output Inspector, you'll find the Module Inspector. The Module Inspector displays properties and parameters of the selected module. Parameters are stored in so called **Fields**. Using the Module Inspector, you can examine different fields of your `ImageLoad` module. The module has, for example, the fields filename (the path the loaded image is stored in), as well as sizeX, sizeY, and sizeZ (the extent of the loaded image). +Below the Output Inspector, you'll find the Module Inspector. The Module Inspector displays properties and parameters of the selected module. 
Parameters are stored in so-called **Fields**. Using the Module Inspector, you can examine different fields of your `ImageLoad` module. The module has, for example, the fields filename (the path the loaded image is stored in), as well as sizeX, sizeY, and sizeZ (the extent of the loaded image). ![Module Inspector](images/tutorials/basicmechanics/BM_04.png "Module Inspector") ### Viewer {#TutorialViewer} -Instead of using the Output Inspector to inspect images, we'd suggest to add another viewer to the network. Search for the module `View2D` and add it to your workspace. Most modules have different connector options. Data is generally transmitted from the top side of a module to another modules bottom side. +Instead of using the Output Inspector to inspect images, we suggest adding another viewer to the network. Search for the module `View2D` and add it to your workspace. Most modules have different connector options. Data is generally transmitted from the top side of a module to another module's bottom side. The module `View2D` has one input connector for voxel images (triangle-shaped) and three other possible input connectors (shaped like half-circles) on the bottom. The half-circle-shaped input connectors will be explained later on. Generally, module outputs can be connected to module inputs with the same symbol and thus transmit information and data between those modules. -![2D Viewer](images/tutorials/basicmechanics/BM_05.png "2D Viewer") +![2D viewer](images/tutorials/basicmechanics/BM_05.png "2D viewer") You can now display the loaded image in the newly added viewer module by connecting the output of the module `ImageLoad` to the input connector of the module `View2D`. Follow these steps to do so:
To initialize rendering, open the `View2D` panel by double-clicking {{< mousebutton "left" >}} on the module. Similar to the Output Inspector, you can scroll through the slices and set different levels of contrast. The amount of displayed annotations is altered by pressing {{< keyboard "A" >}} on the keyboard (annotation-mode). -![View2D Panel](images/tutorials/basicmechanics/BM_07.png "View2D Panel") +![View2D panel](images/tutorials/basicmechanics/BM_07.png "View2D panel") By dragging the connection away from either the input or the output connector, the connection is interrupted. Connections between compatible outputs and inputs are established automatically if two modules get close enough to each other. {{}} -Connecting, Disconnecting, Moving, and Replacing Connections is explained in more detail {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch03s04.html" "here" >}} +Connecting, disconnecting, moving, and replacing connections is explained in more detail {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch03s04.html" "here" >}} {{}} [//]: <> (MVL-653) ### Image Processing {#TutorialImageProcessing} -An average kernel will be used to smooth the image as our next step will be to actually process our image. Add the `Convolution` module to your workspace and disconnect the `View2D` module from the `ImageLoad` module by clicking {{< mousebutton "left" >}} on the connection and pressing {{< keyboard "DEL" >}}. Now, you can build new connections from the module `ImageLoad` to the module `Convolution` and the `Convolution` module to `View2D`. +An average kernel will be used to smooth the image as our next step will be to actually process our image. Add the `Convolution` module to your workspace and disconnect the `View2D` module from the `ImageLoad` module by clicking {{< mousebutton "left" >}} on the connection and pressing {{< keyboard "DEL" >}}. 
Now, you can establish new connections from the module `ImageLoad` to the module `Convolution` and the `Convolution` module to `View2D`. -![Convolution Module](images/tutorials/basicmechanics/BM_08.png "Convolution Module") +![Convolution module](images/tutorials/basicmechanics/BM_08.png "Convolution module") Open the panel of the `Convolution` module by double-clicking {{< mousebutton "left" >}} it. The panel allows configuration of the module. You can adjust parameters or select a kernel. We will be using the *3x3 Average Kernel* for now. -![Select a Kernel](images/tutorials/basicmechanics/BM_09.png "Select a Kernel") +![Select a kernel](images/tutorials/basicmechanics/BM_09.png "Select a kernel") The module `View2D` is now displaying the smoothed image. @@ -118,7 +118,7 @@ To compare the processed and unprocessed image, click {{< mousebutton "left" >}} You can also inspect changes between processed (output connector) and unprocessed (input connector) images by adding a second or even third viewer to your network. "Layers" of applied changes can be inspected next to each other using more than one viewer and placing as well as connecting them accordingly. We will be using a second `View2D` module. Notice how the second viewer is numbered for you to be able to distinguish them better. It might be important to know at this point that numerous connections can be established from one output connector but an input connector can only receive one stream of data. Connect the module `ImageLoad` to the second viewer to display the images twice. You can now scroll through the slices of both viewers and inspect the images. -![Multiple Viewers](images/tutorials/basicmechanics/BM_10.png "Multiple Viewers") +![Multiple viewers](images/tutorials/basicmechanics/BM_10.png "Multiple viewers") ### Parameter Connection for Synchronization {#TutorialParameterConnection} You're now able to scroll through the slices of the image in two separate windows. 
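The effect of the *3x3 Average Kernel* can be illustrated in a few lines of plain Python with NumPy. This is only a sketch of the underlying arithmetic, not MeVisLab's `Convolution` implementation or API; borders are zero-padded here for simplicity, while the module offers its own border-handling options:

```python
import numpy as np


def convolve_average_3x3(image):
    """Smooth a 2D image with a 3x3 average (box) kernel."""
    kernel = np.full((3, 3), 1.0 / 9.0)  # all nine weights are 1/9
    padded = np.pad(image, 1, mode="constant")  # zero-pad the border
    out = np.zeros(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            # each output pixel is the weighted sum of its 3x3 neighborhood
            out[y, x] = np.sum(padded[y:y + 3, x:x + 3] * kernel)
    return out
```

Each output pixel becomes the mean of its 3x3 neighborhood, which is why the filtered image in `View2D` looks smoothed.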
To examine the effect of the filter even better, we will now synchronize both viewers. @@ -127,12 +127,12 @@ We already know data connections between module inputs and outputs. Besides modu In order to practice establishing parameter connections, add the `SyncFloat` module to your workspace. -![SyncFloat Module](images/tutorials/basicmechanics/BM_11.png "SyncFloat Module") +![SyncFloat module](images/tutorials/basicmechanics/BM_11.png "SyncFloat module") We will be synchronizing the startSlice fields of our viewers to be able to directly compare the effect our processing module has on the slices: Right-click {{< mousebutton "right" >}} the viewer `View2D` to open its context menu and select {{< menuitem "Show Window" "Automatic Panel" >}}. -![Automatic Panel View2D](images/tutorials/basicmechanics/BM_12.png "Automatic Panel View2D") +![Automatic panel View2D](images/tutorials/basicmechanics/BM_12.png "Automatic panel View2D") Doing so shows all parameter fields of the module `View2D`. @@ -142,40 +142,40 @@ Now, double-click {{< mousebutton "left" >}} the module `SyncFloat` to open its Click {{< mousebutton "left" >}} on the label startSlice in the automatic panel of the module `View2D`, keep the button pressed, and drag the connection to the label Float1 in the panel of the module `SyncFloat`. -![Synchronize StartSlice](images/tutorials/basicmechanics/BM_13.png "Synchronize StartSlice") +![Synchronize startSlice](images/tutorials/basicmechanics/BM_13.png "Synchronize startSlice") -The connection is drawn as a thin gray arrow between both modules with the arrowhead pointing to the module that receives the field value as input. The value of the field startSlice is now transmitted to the field Float1. Changing startSlice automatically changes Float1, but not the other way round. +The connection is rendered as a thin gray arrow between both modules with the arrowhead pointing to the module that receives the field value as input. 
The value of the field startSlice is now transmitted to the field Float1. Changing startSlice automatically changes Float1, but not the other way round. -![Parameter Connection StartSlice](images/tutorials/basicmechanics/BM_14.png "Parameter Connection StartSlice") +![Parameter connection startSlice](images/tutorials/basicmechanics/BM_14.png "Parameter connection startSlice") -We will now establish a connection from the module `SyncFloat` to the second viewer, `Viewer2D1`. In order to do that, open the automatic panel `View2D1`. Draw a connection from the label Float2 of the panel of the module `SyncFloat` to the label startSlice in the automatic panel of the module `View2D1`. Lastly, implement a connection between the parameter fields startSlice of both viewers. Draw the connection from `View2D1` to `View2D`. +We will now establish a connection from the module `SyncFloat` to the second viewer, `Viewer2D1`. In order to do that, open the automatic panel `View2D1`. Draw a connection from the label Float2 of the panel of the module `SyncFloat` to the label startSlice in the automatic panel of the module `View2D1`. Lastly, establish a connection between the parameter fields startSlice of both viewers, drawn from `View2D1` to `View2D`. ![Synchronize both directions](images/tutorials/basicmechanics/BM_15.png "Synchronize both directions") As a result, scrolling through the slices with the mouse wheel {{< mousebutton "middle" >}} in one of the viewers synchronizes the rendered slice in the second viewer. In this case, you can inspect the differences between smoothed and unsmoothed data on every single slice.
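Conceptually, the parameter connections above form a loop: the first viewer's startSlice feeds Float1, Float2 feeds the second viewer's startSlice, and the back-connection closes the circle. The sketch below models this in plain Python with a minimal, hypothetical `Field` class (not the MeVisLab scripting API); `SyncFloat`'s internal Float1-to-Float2 copy is simplified to a direct connection, and unchanged values are not propagated, which is also why the loop does not cycle forever:

```python
class Field:
    """Tiny stand-in for a module parameter field (illustrative only)."""

    def __init__(self, value=0.0):
        self.value = value
        self._targets = []

    def connect_to(self, target):
        """One-directional parameter connection: self -> target."""
        self._targets.append(target)
        target.set(self.value)  # a new connection pushes the current value once

    def set(self, value):
        if value == self.value:
            return  # unchanged values are not propagated (this also stops cycles)
        self.value = value
        for target in self._targets:
            target.set(value)


# Rebuild the tutorial wiring: View2D.startSlice -> Float1, Float2 -> View2D1.startSlice,
# and View2D1.startSlice -> View2D.startSlice to close the loop in the other direction.
view2d_start_slice = Field(0)
float1 = Field(0)
float2 = Field(0)
view2d1_start_slice = Field(0)

view2d_start_slice.connect_to(float1)
float1.connect_to(float2)          # simplification of SyncFloat's Float1 -> Float2 copy
float2.connect_to(view2d1_start_slice)
view2d1_start_slice.connect_to(view2d_start_slice)

view2d_start_slice.set(42)  # scrolling in the first viewer...
# ...moves the second viewer's stand-in to the same slice (value 42)
```

Setting either startSlice stand-in drives the other one to the same value, mirroring the synchronized scrolling of both viewers.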
-![Your final Network](images/tutorials/basicmechanics/BM_16.png "Your final Network") +![Your final network](images/tutorials/basicmechanics/BM_16.png "Your final network") It is also possible to use the predefined module `SynchroView2D` to accomplish a similar result (`SynchroView2D`'s usage is described in more detail in [this chapter](tutorials/visualization/visualizationexample1/)). ### Grouping Modules {#TutorialGroupingModules} -A contour filter can be created based on our previously created network. To finalize the filter, add the modules `Arithmetic2` and `Morphology` to your workspace and connect the modules as shown below. Double-click {{< mousebutton "left" >}} the module `Arithmetic2` to open its panel. Change the field Function of the module `Arithmetic2` to use the function subtract in the panel of the module. The contour filter is done now. You can inspect each processing step using the Output Inspector by clicking {{< mousebutton "left" >}} on the input and output connectors of the respective modules. The final results can be displayed using the viewer modules. If necessary, adjust the contrast by pressing the right mouse button and moving the cursor. +A contour filter can be created based on our previously created network. To finalize the filter, add the modules `Arithmetic2` and `Morphology` to your workspace and connect the modules as shown below. Double-click {{< mousebutton "left" >}} the module `Arithmetic2` to open its panel. Change the field Function of the module `Arithmetic2` to use the function *subtract* in the panel of the module. The contour filter is now complete. You can inspect each processing step using the Output Inspector by clicking {{< mousebutton "left" >}} on the input and output connectors of the respective modules. The final results can be displayed using the viewer modules. If necessary, adjust the contrast by pressing the right mouse button and moving the cursor.
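The principle behind the contour filter (dilate the image with `Morphology`, then subtract with `Arithmetic2`) can be sketched on a binary mask in plain Python with NumPy. This only illustrates the morphological-gradient idea; it is not the modules' actual implementation, which operates on grayscale voxel data:

```python
import numpy as np


def dilate_3x3(mask):
    """Dilation with a 3x3 structuring element (borders zero-padded)."""
    padded = np.pad(mask, 1, mode="constant")
    out = np.zeros_like(mask)
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            # a pixel becomes 1 if any pixel in its 3x3 neighborhood is 1
            out[y, x] = padded[y:y + 3, x:x + 3].max()
    return out


def contour_filter(mask):
    """Subtracting the original from its dilation leaves only the contour."""
    return dilate_3x3(mask) - mask
```

For a filled square, the result is the one-pixel ring around it, which is the kind of contour the example network renders in the viewer.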
![Grouping modules](images/tutorials/basicmechanics/BM_17.png "Grouping modules") If you'd like to know more about specific modules, search for help. You can do this by right-clicking {{< mousebutton "right" >}} the module and selecting {{< menuitem "Help" >}}, which offers an example network and further information about the selected module. -![Module Help](images/tutorials/basicmechanics/BM_18.png "Module Help") +![Module help](images/tutorials/basicmechanics/BM_18.png "Module help") -To be able to better distinguish the image processing pipeline, you can encapsulate it in a group: select the three modules, for example, by dragging a selection rectangle around them. Then, right- {{< mousebutton "right" >}} the selection to open the context menu and select {{< menuitem "Add to New Group" >}}. +To be able to better distinguish the image processing pipeline, you can encapsulate it in a group: select the three modules, for example, by dragging a selection rectangle around them. Then, right-click {{< mousebutton "right" >}} the selection to open the context menu and select {{< menuitem "Add to New Group" >}}. -![Add modules to new group](images/tutorials/basicmechanics/BM_19.png "Add to new group") +![Add modules to new group](images/tutorials/basicmechanics/BM_19.png "Add modules to new group") Enter a name for the new group, for example, *Filter*. The new group is created and displayed as a green rectangle. The group allows for quick interactions with all its modules. -![Your Filter Group](images/tutorials/basicmechanics/BM_20.png "Your Filter Group") +![Your filter group](images/tutorials/basicmechanics/BM_20.png "Your filter group") -Your network got very complex and you lost track? No problem. Let MeVisLab arrange your modules automatically via {{< menuitem "Mein Menu" "Edit" "Auto Arrange Selection" >}} (or via keyboard shortcut {{< keyboard "CTRL" "1" >}}). +Your network got very complex and you lost track? No problem. 
Let MeVisLab arrange your modules automatically via {{< menuitem "Main Menu" "Edit" "Auto Arrange Selection" >}} (or via keyboard shortcut {{< keyboard "Ctrl" "1" >}}). Now, it is time to save your first network. Select {{< menuitem "File" "Save" >}} to save the network in an *.mlab* file. @@ -197,9 +197,9 @@ To condense our filter into one single module, we will now be creating a macro m ![Convert to local macro](images/tutorials/basicmechanics/BM_21.png "Convert to local macro") ![Your first local macro](images/tutorials/basicmechanics/BM_22.png "Your first local macro") -Right-click {{< mousebutton "right" >}} the macro module and select {{< menuitem "Show Internal Network" >}} to inspect and change the internal network. You can change the properties of the new macro module by changing the properties in the internal network. You can, for example, click {{< mousebutton "left" >}} the module `Convolution` and change the kernel. +Right-click {{< mousebutton "right" >}} the macro module and select {{< menuitem "Show Internal Network" >}} to inspect and change the internal network. You can change the properties of the new macro module by editing the modules in its internal network. You can, for example, double-click {{< mousebutton "left" >}} the module `Convolution` and change the kernel. -![Internal Network of your local macro](images/tutorials/basicmechanics/BM_23.png "Internal Network of your local macro") +![Internal network of your local macro](images/tutorials/basicmechanics/BM_23.png "Internal network of your local macro") {{< youtube "VmK6qx-vKWk">}} @@ -211,7 +211,7 @@ More information on macro modules can be found {{< docuLinks "/Resources/Documen [//]: <> (MVL-651) ## Summary -* MeVisLab provides predefined modules you can reuse and connect for building more or less complex networks. +* MeVisLab provides predefined modules that you can reuse and connect to build networks of varying complexity. 
* Each module's output can be previewed using the Output Inspector. * Each module provides example networks to explain its usage. * Parameters of each module can be changed in the Module Inspector or automatic panel of the module. diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/coordinatesystems/coordinatesystems.md b/mevislab.github.io/content/tutorials/basicmechanisms/coordinatesystems/coordinatesystems.md index 775d50db5..56c70bfe9 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/coordinatesystems/coordinatesystems.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/coordinatesystems/coordinatesystems.md @@ -14,7 +14,7 @@ menu: --- # Example 1.1: MeVisLab Coordinate Systems -Three coordinate systems exist next to each other: +Three coordinate systems exist side by side: * World coordinates * Voxel coordinates * Device coordinates @@ -23,7 +23,7 @@ World coordinate systems in MeVisLab are always [right handed](https://en.wikipe The blue rectangle shows the same region in the three coordinate systems. -![Coordinate Systems in MeVisLab](images/tutorials/basicmechanics/GSExampleImageProcessing10b.png "Coordinate Systems in MeVisLab") +![Coordinate systems in MeVisLab](images/tutorials/basicmechanics/GSExampleImageProcessing10b.png "Coordinate systems in MeVisLab") ## World Coordinates World coordinates are: @@ -36,11 +36,11 @@ The origin of the world coordinate system can be anywhere and is not clearly def ### World Coordinates in MeVisLab You can show the world coordinates in MeVisLab by using the following example network: -![World Coordinates in MeVisLab](images/tutorials/basicmechanics/WorldCoordinates.png "World Coordinates in MeVisLab") +![World coordinates in MeVisLab](images/tutorials/basicmechanics/WorldCoordinates.png "World coordinates in MeVisLab") -The `ConstantImage` module generates an artificial image with a certain size, data type, and a constant fill value. 
The origin of the image is at the origin of the world coordinate system; therefore, the `SoCoordinateSystem` module shows the world coordinate system. In order to have a larger z-axis, open the panel of the `ConstantImage` module and set *IMage Size* for *Z* to *256*. +The `ConstantImage` module generates an artificial image with a certain size, data type, and a constant fill value. The origin of the image is at the origin of the world coordinate system; therefore, the `SoCoordinateSystem` module shows the world coordinate system. In order to have a larger z-axis, open the panel of the `ConstantImage` module and set *Image Size* for *Z* to *256*. -![ConstantImage Info](images/tutorials/basicmechanics/ConstantImageInfo.png "ConstantImage Info") +![ConstantImage info](images/tutorials/basicmechanics/ConstantImageInfo.png "ConstantImage info") Placing an object into the Open Inventor scene of the `SoExaminerViewer`, in this case a `SoCube` with *width*, *height*, and *depth* of 10, places the object at the origin of the world coordinate system. @@ -67,9 +67,9 @@ Voxel coordinates are: ### Voxel Coordinates in MeVisLab You can show the voxel coordinates in MeVisLab by using the following example network: -![Voxel Coordinates](images/tutorials/basicmechanics/VoxelCoordinates.png "Voxel Coordinates") +![Voxel coordinates](images/tutorials/basicmechanics/VoxelCoordinates.png "Voxel coordinates") -Load the file *Liver1_CT_venous.small.tif*. The `Info` module shows detailed information about the image loaded by the `LocalImage`. Opening the `SoExaminerViewer` shows the voxel coordinate system of the loaded image. You may have to change the LUT in `SoGVRVolumeRenderer`, so that the image looks better. +Load the file *Liver1_CT_venous.small.tif*. The `Info` module shows detailed information about the image loaded by the `LocalImage` module. Opening the `SoExaminerViewer` shows the voxel coordinate system of the loaded image. 
You may have to change the LUT in `SoGVRVolumeRenderer` so that the image looks better. ![Voxel coordinates of the loaded image](images/tutorials/basicmechanics/SoExaminerViewer_Voxel.png "Voxel coordinates of the loaded image") @@ -83,7 +83,7 @@ You can change the scaling to 1 by adding a `Resample3D` module to the network: ![Resample3D](images/tutorials/basicmechanics/Resample3D.png "Resample3D") -![Image Info after Resampling](images/tutorials/basicmechanics/ImageInfo_AdvancedResampled.png "Image Info after Resampling") +![Image info after resampling](images/tutorials/basicmechanics/ImageInfo_AdvancedResampled.png "Image info after resampling") The voxel size is now 1. @@ -97,7 +97,7 @@ Replace the `SoGroup` module from the World Group in your network by a `SoSepara Opening the `SoExaminerViewer` shows the world coordinate system in white and the voxel coordinate system in yellow. -![World and Voxel coordinates](images/tutorials/basicmechanics/SoExaminerViewer_both.png "World and Voxel coordinates") +![World and voxel coordinates](images/tutorials/basicmechanics/SoExaminerViewer_both.png "World and voxel coordinates") On the yellow axis, we can see that the coordinate systems are located as already seen in the `Info` module *Advanced* tab. On the x-axis, the voxel coordinate origin is translated by -186.993 and on the y-axis, it is translated by -173.993. diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/coordinatesystems/coordinatesystems2.md b/mevislab.github.io/content/tutorials/basicmechanisms/coordinatesystems/coordinatesystems2.md index d54567cd6..b11538962 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/coordinatesystems/coordinatesystems2.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/coordinatesystems/coordinatesystems2.md @@ -22,9 +22,9 @@ World coordinates also refer to the patient axes. 
They are: * Right-handed * Not standardized regarding their origin -![World Coordinates in Context of the Human Body](images/tutorials/visualization/V2_00.png "World Coordinates in Context of the Human Body") +![World coordinates in context of the human body](images/tutorials/visualization/V2_00.png "World coordinates in context of the human body") -The Digital Imaging and Communications in Medicine (DICOM) standard defines a data format that groups information into data sets. This way, the image data is always kept together with all meta information like patient ID, study time, series time, acquisition data, etc. The image slice is represented by another tag with pixel information. +The Digital Imaging and Communications in Medicine (DICOM) standard defines a data format that groups information into data sets. This way, the image data is always kept together with all meta information like patient ID, study time, series time, and acquisition data. The image slice is represented by another tag with pixel information. DICOM tags have unique numbers, encoded as two 16-bit numbers, usually shown in hexadecimal notation as two four-digit numbers (xxxx,xxxx). These numbers are the data group number and the data element number. @@ -55,7 +55,7 @@ The module `OrthoView2D` provides a 2D view displaying the input image in three As already learned in the previous example [1.1: MeVisLab Coordinate Systems](tutorials/basicmechanisms/coordinatesystems/coordinatesystems), world and voxel positions are based on different coordinate systems. Selecting the top left corner of any of your views will not show a world position of *(0, 0, 0)*. You can move the mouse cursor to the voxel position *(0, 0, 0)* as seen in the image information of the viewers in brackets *(x, y, z)*. The field worldPosition then shows the location of the image in the world coordinate system (see `Info` module). 
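The relation between the two positions can be sketched as a homogeneous 4×4 voxel-to-world matrix: the voxel size sits on the diagonal and the world position of voxel *(0, 0, 0)* in the last column. The translation values below are the ones seen in the previous example's `Info` module; the 0.7 mm voxel size is only an illustrative assumption:

```python
import numpy as np

# Illustrative voxel-to-world matrix: voxel size on the diagonal (assumed
# 0.7 mm), world position of voxel (0, 0, 0) in the last column (the
# translation values reported by the Info module in the previous example).
VOXEL_TO_WORLD = np.array([
    [0.7, 0.0, 0.0, -186.993],
    [0.0, 0.7, 0.0, -173.993],
    [0.0, 0.0, 0.7,    0.0],
    [0.0, 0.0, 0.0,    1.0],
])

def to_world(voxel):
    """Map an (x, y, z) voxel position to world coordinates."""
    homogeneous = np.append(np.asarray(voxel, dtype=float), 1.0)
    return (VOXEL_TO_WORLD @ homogeneous)[:3]
```

Mapping voxel *(0, 0, 0)* yields the translated image origin rather than *(0, 0, 0)*, which is exactly what the worldPosition field shows.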
-![OrthoView2D Voxel- and World Position](images/tutorials/basicmechanics/OrthoView2D_WorldPosition.png "OrthoView2D Voxel- and World Position") +![OrthoView2D voxel and world position](images/tutorials/basicmechanics/OrthoView2D_WorldPosition.png "OrthoView2D voxel and world position") Another option is to use the module `OrthoReformat3` that transforms the input image (by rotating and/or flipping) into the three main views commonly used: * Output 0: Sagittal view diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/dataimport.md b/mevislab.github.io/content/tutorials/basicmechanisms/dataimport.md index 7755800f0..3faa773ea 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/dataimport.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/dataimport.md @@ -29,12 +29,12 @@ These chapters explain the data formats and modules related to this example: Example files and images can be found in your MeVisLab installation directory under Packages > MeVisLab > Resources > DemoData {{}} -Detailed explanations on loading images onto your MeVisLab workspace can be found {{< docuLinks "/Resources/Documentation/Publish/SDK/GettingStarted/ch03.html" "here" >}} +Detailed explanations on loading images into your MeVisLab workspace can be found {{< docuLinks "/Resources/Documentation/Publish/SDK/GettingStarted/ch03.html" "here" >}} {{}} ## Images {#ImageImport} A good option to load images is the `ImageLoad` module. -![ImageLoad Module](images/tutorials/basicmechanics/ImageLoad.png "ImageLoad Module") +![ImageLoad module](images/tutorials/basicmechanics/ImageLoad.png "ImageLoad module") The `ImageLoad` module can import the following formats: * DICOM @@ -48,7 +48,7 @@ The `ImageLoad` module can import the following formats: * JPEG * MLImageFileFormat -Basic information of the imported images is available on the panel that opens via double-click. 
+Basic information about the imported images is available on the panel that opens via double-click {{< mousebutton "left" >}}. ## DICOM Data {#DICOMImport} {{}} Additional information about **Digital Imaging and Communications in Medicine (D {{< /alert >}} Even though the `ImageLoad` module explained above is able to import DICOM data, a much better way is to use one of the specialized modules for DICOM images, such as `DicomImport`. -The `DicomImport` module allows to define a directory containing DICOM files to import as well as a list of files that can be dropped to the UI and imported. After import, the volumes are shown in a patient tree providing the following patient, study, series, and volume information (depending on the availability in the DICOM file(s)): +The `DicomImport` module allows defining a directory containing DICOM files to import as well as a list of files that can be dropped onto the UI and imported. After import, the volumes are shown in a patient tree providing the following patient, study, series, and volume information (depending on the availability in the DICOM file(s)): * **PATIENT LEVEL** Patient Name (0010,0010) - Patient Birthdate (0010,0030) * **STUDY LEVEL** Study Date (0008,0020) - Study Description (0008,1030) * **SERIES/VOLUME LEVEL** Modality (0008,0060) - Series Description (0008,103e) - Rows (0028,0010) - Columns (0028,0011) - number of slices in volume - number of timepoints in volume -![DicomImport Module](images/tutorials/basicmechanics/DicomImport.png "DicomImport Module") +![DicomImport module](images/tutorials/basicmechanics/DicomImport.png "DicomImport module") ### Configuration -The `DicomImport` module generates volumes based on the **Dicom Processor Library (DPL)** that allows to define sorting and partitioning options. +The `DicomImport` module generates volumes based on the **Dicom Processor Library (DPL)** that allows defining sorting and partitioning options. 
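The tag numbers listed in the patient tree above are pairs of 16-bit group and element numbers, each written as four hexadecimal digits; a small sketch of decoding such a tag string:

```python
def parse_dicom_tag(tag):
    """Split a '(gggg,eeee)' DICOM tag into its numeric group and
    element parts, each a 16-bit value given in hexadecimal."""
    group, element = tag.strip("()").split(",")
    return int(group, 16), int(element, 16)
```

For example, the Patient Name tag `(0010,0010)` decodes to group 16, element 16.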
-![DicomImport Sort Part Configuration](images/tutorials/basicmechanics/DicomImportSortPart.png "DicomImport Sort Part Configuration") +![DicomImport sort/part configuration](images/tutorials/basicmechanics/DicomImportSortPart.png "DicomImport sort/part configuration") ### DicomTree Information In order to get all DICOM tags from your currently imported and selected volume, you can connect the `DicomImport` module to a `DicomTagBrowser`. -![DicomTagBrowser Module](images/tutorials/basicmechanics/DicomTagBrowser.png "DicomTagBrowser Module") +![DicomTagBrowser module](images/tutorials/basicmechanics/DicomTagBrowser.png "DicomTagBrowser module") -In MeVisLab versions later than 4.2.0, the *Output Inspector* provides the option to show the DICOM tags of the currently selected output directly. You do not need to add a separate `DicomTagBrowser` module anymore. +In MeVisLab versions later than 4.2.0, the Output Inspector provides the option to show the DICOM tags of the currently selected output directly. You do not need to add a separate `DicomTagBrowser` module anymore. -![DICOM Information in Output Inspector](images/tutorials/basicmechanics/OutputInspectorDICOM.png "DICOM Information in Output Inspector") +![DICOM information in the Output Inspector](images/tutorials/basicmechanics/OutputInspectorDICOM.png "DICOM information in the Output Inspector") ## Segmentations / 2D Contours {#2DContours} Two-dimensional contours in MeVisLab are handled via *CSO*s (**C**ontour **S**egmentation **O**bjects). @@ -116,13 +116,13 @@ The module `WEMLoad` loads different 3D mesh file formats, for example: * VRML (*.wrl*) * Winged Edge Mesh (*.wem*) -![WEMLoad Module](images/tutorials/basicmechanics/WEMLoad.png "WEMLoad Module") +![WEMLoad module](images/tutorials/basicmechanics/WEMLoad.png "WEMLoad module") WEMs can be rendered via Open Inventor by using the modules `SoExaminerViewer` or `SoRenderArea` and `SoCameraInteraction`. 
Before visualizing a WEM, it needs to be converted to a scene object via `SoWEMRenderer`. -![SoWEMRenderer Module](images/tutorials/basicmechanics/SoWEMRenderer.png "SoWEMRenderer Module") +![SoWEMRenderer module](images/tutorials/basicmechanics/SoWEMRenderer.png "SoWEMRenderer module") {{}} Tutorials for WEMs are available [here](../../dataobjects/surfaces/surfaceobjects). @@ -135,7 +135,7 @@ The `SoSceneLoader` module is able to load external 3D formats. MeVisLab uses th Supported file formats of the assimp library are documented on their [website](https://github.com/assimp/assimp/blob/master/doc/Fileformats.md). {{}} -![SoSceneLoader Module](images/tutorials/basicmechanics/SoSceneLoader.png "SoSceneLoader Module") +![SoSceneLoader module](images/tutorials/basicmechanics/SoSceneLoader.png "SoSceneLoader module") The {{< docuLinks "/../MeVisLab/Standard/Documentation/Publish/ModuleReference/SoSceneLoader.html" "SoSceneLoader" >}} module generates a 3D scene from your loaded files that can be rendered via {{< docuLinks "/../MeVisLab/Standard/Documentation/Publish/ModuleReference/SoExaminerViewer.html" "SoExaminerViewer" >}} or {{< docuLinks "/../MeVisLab/Standard/Documentation/Publish/ModuleReference/SoRenderArea.html" "SoRenderArea" >}} and {{< docuLinks "/../MeVisLab/Standard/Documentation/Publish/ModuleReference/SoCameraInteraction.html" "SoCameraInteraction" >}} diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules.md index 89cb286fe..9affa48b6 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules.md @@ -26,7 +26,7 @@ The internal network of a macro module is saved in an *.mlab* file, often referr You have two main options for developing a macro module: -* **With Internal Networks**: Use a macro module to reuse a network of modules. 
For example, if you build a network that applies a specific image filter and you want to use this setup in multiple projects, you can wrap the entire network into a single macro module. This way, you don’t need to manually reconnect all the individual modules each time — you just use your macro module. You can also add inputs and outputs to connect your internal network with other modules. +* **With Internal Networks**: Use a macro module to reuse a network of modules. For example, if you built a network that applies a specific image filter and you want to use this setup in multiple projects, you can wrap the entire network into a single macro module. This way, you don’t need to manually reconnect all the individual modules each time — you just use your macro module. You can also add inputs and outputs to connect your internal network with other modules. An example can be found in chapter [Basic Mechanics of MeVisLab (Example: Building a Contour Filter)](tutorials/basicmechanisms#TutorialMacroModules). @@ -38,7 +38,7 @@ A typical example for macro modules without an internal network is the execution It is also possible to combine both approaches. You can add internal networks and additionally write Python code for user interaction and processing. - ![Internal Processing and Python Interaction](images/tutorials/basicmechanics/with.png "Internal Processing and Python Interaction") + ![Internal processing and Python interaction](images/tutorials/basicmechanics/with.png "Internal processing and Python interaction") ### Benefits of Macro Modules * **Encapsulation:** @@ -59,7 +59,7 @@ They are often used to encapsulate dynamic user interfaces built with scripting, ### Scope of Macro Modules #### Local Macro Module -A Local Macro module in MeVisLab exists within the context of the current network document - i.e., it’s defined *locally* rather than being installed into the global module database. It does not require a package. 
It lives inside the directory of the current network file (*.mlab*) you’re working on. +A Local Macro module in MeVisLab exists within the context of the current network document — i.e., it’s defined *locally* rather than being installed into the global module database. It does not require a package. It lives inside the directory of the current network file (*.mlab*) you’re working on. * A local macro is visible and editable in the directory of your current network. * A local macro is not listed in the Modules panel or the module search. @@ -90,24 +90,24 @@ Data input connectors, represented by triangles for ML images, half-circles for #### Outputs Output connectors provide the results of the processing performed by their internal networks. These outputs can then be connected to the inputs of other modules. -Data Outputs (triangle, half-circle, square) provide the processed data from the internal network or Python file. The type of data an output provides depends on the outputs of the modules within the macro that are connected to this output. +Data outputs (triangle, half-circle, square) provide the processed data from the internal network or Python file. The type of data an output provides depends on the outputs of the modules within the macro that are connected to this output. #### Parameter Fields -Parameter Fields allow users to control the behavior of the internal network. They can be connected to the parameters/fields of other modules or manually adjusted by the user. They also allow other modules to read values or states from within the encapsulated network or Python file. +Parameter fields allow users to control the behavior of the internal network. They can be connected to the parameters/fields of other modules or manually adjusted by the user. They also allow other modules to read values or states from within the encapsulated network or Python file. 
You have two options when adding fields to your macro module: * **Define your own fields:** You can define your own fields by specifying their name, type, and default value in the *.script* file. This allows you to provide custom parameters for your macro module, tailored to your specific needs. These parameters can be used as input from the user or as output from the module's processing. * **Reuse fields from the internal network:** Instead of defining your own field, you can expose an existing field from one of the modules of your internal network. To do this, you reference the internalName of the internal field you want to reuse. This makes the internal field accessible at the macro module level, allowing users to interact with it directly without duplicating parameters. Changes to the field value are automatically applied in your internal network. -![Inputs, Outputs, and Fields](images/tutorials/basicmechanics/fields.png "Inputs, Outputs, and Fields") +![Inputs, outputs, and fields](images/tutorials/basicmechanics/fields.png "Inputs, outputs, and fields") ### Files Associated with a Macro Module Macro modules typically need the following files: * **Definition file (*.def*):** The module definition file contains the definition and information about the module like name, author, or package. **Definition files are only available for global macro modules**. * **Script file (*.script*):** The script file defines inputs, outputs, parameter fields, and the user interface of the macro module. If you want to add Python code, it includes the reference to the Python file. The *.script* file allows you to define short Python functions to be called on field changes and user interactions. 
-![user interface and the internal interface](images/tutorials/basicmechanics/mycountourFilter.png "user interface and the internal interface") +![User interface and the internal interface](images/tutorials/basicmechanics/mycountourFilter.png "User interface and the internal interface") * **Python file (*.py*):** *(Optional)* The Python file contains the Python code that is used by the module. See section [Python functions and Script files](tutorials/basicmechanisms/macromodules#PythonAndScripts) for different options to add Python functions to user interactions. * **Internal network file (*.mlab*):** *(Optional)* Stores the internal network of the module if available. This file essentially defines the macro module's internal structure and connections. @@ -127,7 +127,7 @@ Field listeners are mechanisms to execute Python code automatically any time the You can define field listeners within the *Commands* sections of the *.script* file. You get a reference to the field object and then use a method to add a callback function that will be executed when the field's value is modified. -For an example see [Example 2.5.2: Module interactions via Python scripting](tutorials/basicmechanisms/macromodules/scriptingexample2/). +For an example, see [Example 2.5.2: Module Interactions via Python Scripting](tutorials/basicmechanisms/macromodules/scriptingexample2/). ## Summary * Macro modules allow you to add your own functionality to MeVisLab. You can add inputs and outputs and connect existing modules to your new macro module. 
diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/globalmacromodules.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/globalmacromodules.md index a1ce2fc38..b9a93eddb 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/globalmacromodules.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/globalmacromodules.md @@ -8,7 +8,7 @@ tags: ["Beginner", "Tutorial", "Macro", "Macro modules", "Global Macro"] menu: main: identifier: "globalmacromodules" - title: "Creation of Global Macro Modules From a Local Macro Using the Project Wizard" + title: "Creation of Global Macro Modules from a Local Macro Using the Project Wizard" weight: 390 parent: "macro_modules" --- @@ -18,7 +18,7 @@ menu: {{< youtube "M4HnA0d1V5k">}} ## Introduction -In this chapter you will learn how to create global macro modules. There are many ways to do this. You can convert local macros into global macro modules or you can directly create global macro modules using the *Project Wizard*. In contrast to local macro modules, global macro modules are commonly available throughout projects and can be found via module search and under {{< menuitem "Modules" >}}. +In this chapter, you will learn how to create global macro modules. There are many ways to do this. You can convert local macros into global macro modules or you can directly create global macro modules using the *Project Wizard*. In contrast to local macro modules, global macro modules are commonly available throughout projects and can be found via module search and under {{< menuitem "Modules" >}}. ## Steps to Do @@ -77,7 +77,7 @@ Instead of converting a local macro module into a global macro module, you can a Make sure to choose *Directory Structure* as *self-contained*. This ensures that all files of your module are stored in a single directory. {{}} - Press *Next >* to edit further properties. 
You have the opportunity to directly define the internal network of the macro module, for example, by copying an existing network. In this case, we could copy the network of the local macro module `Filter` we already created. In addition, you have the opportunity to directly create a Python file. Python scripting can be used for the implementation of module interactions and other module functionalities. More information about Python scripting can be found [here](./tutorials/basicmechanisms/macromodules/pythonscripting). + Click *Next >* to edit further properties. You have the opportunity to directly define the internal network of the macro module, for example, by copying an existing network. In this case, we could copy the network of the local macro module `Filter` we already created. In addition, you have the opportunity to directly create a Python file. Python scripting can be used for the implementation of module interactions and other module functionalities. More information about Python scripting can be found [here](./tutorials/basicmechanisms/macromodules/pythonscripting). {{< imagegallery 2 "images" "ProjectWizard1" "ProjectWizard2" >}} diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/guidesign.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/guidesign.md index 1e16ab59d..dbc9b2c83 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/guidesign.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/guidesign.md @@ -18,7 +18,7 @@ menu: {{< youtube "tdQUkkROWBg">}} ## Introduction -This chapter will give you an introduction into the creation of module panels and user +This chapter gives you an introduction to the creation of module panels and user interfaces. For the implementation, you will need to use the {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html" "MeVisLab Definition Language (MDL)">}}. 
@@ -36,7 +36,7 @@ In [Example 2.2](tutorials/basicmechanisms/macromodules/globalmacromodules) we c The *Automatic Panel* contains fields, as well as module inputs and outputs. In this case, no fields exist except the instanceName. Accordingly, there is no way to interact with the module. Only the input and the output of the module are given. -![Automatic Panel](images/tutorials/basicmechanics/GUI_10.png "Automatic Panel") +![Automatic panel](images/tutorials/basicmechanics/GUI_10.png "Automatic panel") To add and edit a panel, open the context menu and select {{< menuitem "Related Files" "Filter.script" >}}. The text editor {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch26.html" "MATE">}} opens. You can see the file *Filter.script*, which you can edit to define a custom user interface for the module. @@ -65,7 +65,7 @@ Interface { {{}} ##### Module Inputs and Outputs -To create an input/output, you need to define a *Field* in the respective input/output section. Each input/output gets a name (here input0/output0) that you can use to reference this field. The module input maps to an input of the internal network. You need to define this mapping. In this case, the input of the macro module `Filter` maps to the input of the module `Convolution` of the internal network (internalName = Convolution.input0). Similarly, you need to define which output of the internal network maps to the output of the macro module `Filter`. In this example, the output of the internal module `Arithmethic2` maps to the output of our macro module `Filter` (internalName = Arithmetic2.output0). +To create an input/output, you need to define a *Field* in the respective input/output section. Each input/output gets a name (here input0 and output0) that you can use to reference this field. The module input maps to an input of the internal network. You need to define this mapping. 
In this case, the input of the macro module `Filter` maps to the input of the module `Convolution` of the internal network (internalName = Convolution.input0). Similarly, you need to define which output of the internal network maps to the output of the macro module `Filter`. In this example, the output of the internal module `Arithmetic2` maps to the output of our macro module `Filter` (internalName = Arithmetic2.output0). Creating an input/output causes: 1. Input/output connectors are added to the module. @@ -73,7 +73,7 @@ Creating an input/output causes: 3. Input/output fields are added to the automatic panel. 4. A description of the input/output fields is automatically added to the module help file, when opening the *.mhelp* file after input/output creation. Helpfile creation is explained in [Example 2.3](tutorials/basicmechanisms/macromodules/helpfiles/). -![Internal Network of your macro module](images/tutorials/basicmechanics/BM_23.png "Internal Network of your macro module") +![Internal network of your macro module](images/tutorials/basicmechanics/BM_23.png "Internal network of your macro module") ##### Module Fields In the *Parameters* section, you can define *fields* of your macro module. These fields may map to existing fields of the internal network (internalName = ...), but they do not need to and can also be completely new. You can reference these fields when creating a panel, to allow interactions with these fields. All fields appear in the *Automatic Panel*. @@ -81,6 +81,10 @@ In the *Parameters* section, you can define *fields* of your macro module. These ### Module Panel Layout To create your own user interface, we need to create a {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_Window" "Window" >}}. A window is one of the layout elements that exist in MDL. These layout elements are called {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#Controls" "controls" >}}.
The curly brackets define the window section, in which you can define properties of the window and insert further controls like a {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_Box" "Box" >}}. +{{}} +We use *Category* as the top-level layouter in the *Window* to give the inner content a small margin. Otherwise, the controls touch the border of the window and look unappealing. +{{}} + Initially, we call the window *MyWindowTitle*, which can be used to reference this window. Double-clicking {{< mousebutton "left" >}} on your module now opens your first self-developed user interface. @@ -103,16 +107,16 @@ Interface { } Window MyWindowName { - Category { - title = MyWindowTitle + title = MyWindowTitle + Category { Box MyBox {} } } ``` {{}} -![Module Panel](images/tutorials/basicmechanics/ModulePanel.png "Module Panel") +![Module panel](images/tutorials/basicmechanics/ModulePanel.png "Module panel") You can define different properties of your control. For a window, you can, for example, define a title, or whether the window should be shown in full screen (*fullscreen = Yes*).
@@ -132,8 +136,8 @@ the following examples: ```Stan Window MyWindowName { title = MyWindowTitle - w = 100 - h = 50 + w = 100 + h = 50 Category { Vertical { Box MyBox { @@ -148,14 +152,14 @@ Window MyWindowName { ``` {{}} -![Vertical layout of Box and Text](images/tutorials/basicmechanics/VerticalLayout.png "Vertical layout of Box and Text") +![Vertical layout of Box and Label](images/tutorials/basicmechanics/VerticalLayout.png "Vertical layout of Box and Label") {{< highlight filename="Filter.script" >}} ```Stan Window MyWindowName { title = MyWindowTitle - w = 100 - h = 50 + w = 100 + h = 50 Category { Horizontal { Box MyBox { @@ -170,7 +174,7 @@ Window MyWindowName { ``` {{}} -![Horizontal layout of Box and Text](images/tutorials/basicmechanics/HorizontalLayout.png "Horizontal layout of Box and Text") +![Horizontal layout of Box and Label](images/tutorials/basicmechanics/HorizontalLayout.png "Horizontal layout of Box and Label") There are many more controls that can be used. For example, a CheckBox, @@ -182,7 +186,7 @@ a Table, a Grid, or a Button. To find out more, take a look into the {{< docuLin Until now, we learned how to create the layout of a panel. As a next step, we would like to get an overview of interactions. {{}} -You can add the module `GUIExample` to your workspace and play around with is. +You can add the module `GUIExample` to your workspace and play around with it. {{}} #### Access to Existing Fields of the Internal Network @@ -190,9 +194,9 @@ To interact with fields of the internal network in your user interface, we need Then, open the panel of the module `Convolution` and right-click {{< mousebutton "right" >}} the field title *Use* of the box *Predefined Kernel* and select *Copy Name*. You have now copied the internal network name of the field to your clipboard. The name is made up of *ModuleName.FieldName*, in this case Convolution.predefKernel.
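The copied name is exactly what goes into the internalName tag of an MDL field definition. A minimal sketch of the resulting *Parameters* entry (the title tag is illustrative, not taken from the tutorial's files):

```Stan
Interface {
  Parameters {
    Field kernel {
      internalName = Convolution.predefKernel
      title        = Kernel  // optional: override the field's title
    }
  }
}
```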
-![Convolution Module](images/tutorials/basicmechanics/Convolution.png "Convolution Module") +![Convolution module](images/tutorials/basicmechanics/Convolution.png "Convolution module") -In the panel of the module `Convolution`, you can change this variable *Kernel* via a drop-down menu. In MDL, a drop-down menu is called a {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_ComboBox" "ComboBox" >}}. We can take over the field predefKernel, its drop-down menu and all its properties by creating a new field in our panel and reference to the internal field Convolution.predefKernel, which already exist in the internal network. +In the panel of the module `Convolution`, you can change this variable *Kernel* via a drop-down menu. In MDL, a drop-down menu is called a {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_ComboBox" "ComboBox" >}}. We can take over the field predefKernel, its drop-down menu, and all its properties by creating a new field in our panel and referencing the internal field Convolution.predefKernel, which already exists in the internal network. Changes of the properties of this field can be done in the curly brackets using tags (here, we changed the title). @@ -239,6 +243,7 @@ Interface { Window MyWindowName { title = MyWindowTitle + Category { Field kernel {} } @@ -247,14 +252,14 @@ Window MyWindowName { {{}} #### Commands -We not only can use existing functionalities, but also add new interactions via Python scripting. +We can not only use existing functionality but also add new interactions via Python scripting. In the example below, we added a *wakeupCommand* to the Window and a simple *command* to the Button.
{{< highlight filename="Filter.script" >}} ```Stan Window MyWindowName { - title = MyWindowTitle + title = MyWindowTitle wakeupCommand = myWindowCommand Category { @@ -266,7 +271,7 @@ ``` {{}} -The *wakeupCommand* defines a Python function that is executed as soon as the Window is opened. The Button *command* is executed when the user clicks {{< mousebutton "left" >}} on the Button. +The *wakeupCommand* defines a Python function that is executed as soon as the Window is opened. The Button's *command* is executed when the user clicks {{< mousebutton "left" >}} on the Button. Each command references a Python function that is executed whenever the respective action (opening the Window or clicking the Button) occurs. @@ -289,7 +294,7 @@ The section *Source* should already be available and generated automatically in [//]: <> (MVL-653) -You can right-click {{< mousebutton "right" >}} on the command (*myWindowCommand* or *myButtonAction*) in your *.script* file and select {{< menuitem "Create Python Funtion......" >}}. The text editor MATE opens automatically and generates an initial Python function for you. You can simply add a logging function or implement complex logic here. +You can right-click {{< mousebutton "right" >}} on the command (myWindowCommand or myButtonAction) in your *.script* file and select {{< menuitem "Create Python Function..." >}}. The text editor MATE opens automatically and generates an initial Python function for you. You can simply add a logging function or implement complex logic here.
**Example:** {{< highlight filename="Filter.py" >}} diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/helpfiles.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/helpfiles.md index 0cee8f097..3ecb90c2d 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/helpfiles.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/helpfiles.md @@ -14,7 +14,7 @@ menu: --- # Example 2.3: Creation of Module Help -Generating help of a macro module is part of the video about macro modules from [Example 2: Creation of global macro modules](tutorials/basicmechanisms/macromodules/globalmacromodules) +Generating help of a macro module is part of the video about macro modules from [Example 2: Creation of Global Macro Modules](tutorials/basicmechanisms/macromodules/globalmacromodules) {{< youtube "M4HnA0d1V5k">}} ## Introduction @@ -29,21 +29,25 @@ We will start by creating a help file using the built-in text editor {{< docuLin [//]: <> (MVL-653) -![Creation of module help](images/tutorials/basicmechanics/GUI_06.png "Creation of module help") +![Creation of module help: context menu](images/tutorials/basicmechanics/GUI_06.png "Creation of module help: context menu") {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch26.html" "MeVisLab MATE">}} opens. An *.mhelp* file (*Filter.mhelp*) is created automatically and stored in the folder your macro module `Filter` is stored in. You can find the folder structure in MATE on the left side. Editing the text field, you can edit the help file. 
[//]: <> (MVL-653) -![Edit module help file via MATE](images/tutorials/basicmechanics/GUI_07.png "Edit module help file via MATE") +![Edit module help file via MATE: IDE](images/tutorials/basicmechanics/GUI_07.png "Edit module help file via MATE: IDE") When creating the help file of a module, all important information of the module down to the field specifications are extracted and created automatically. Thus, the basic module information is always available in the module help. Additional documentation should be added by the module's author. On the left side, you can find the outline of the help file. Each section can be edited. In this example, we added the purpose of the module to the help file. -![Edit module help file via MATE](images/tutorials/basicmechanics/GUI_08.png "Edit module help file via MATE") +![Edit module help file via MATE: filled out chapter](images/tutorials/basicmechanics/GUI_08.png "Edit module help file via MATE: filled out chapter") MATE offers the possibility to format the text. By using the button *M*, module names can be formatted in such a way that links to the respective help file of the modules are created. -![Edit module help file via MATE](images/tutorials/basicmechanics/GUI_08_2.png "Edit module help file via MATE") +![Edit module help file via MATE: using a format role](images/tutorials/basicmechanics/GUI_08_2.png "Edit module help file via MATE: using a format role") + +{{}} +To be safe against renaming, it is best to use *:module:\`this\`*. When generating the help file's HTML, the keyword *this* is automatically replaced by the current module's name. +{{}} After finishing your documentation, you can click *Generate Help* or {{< keyboard "F7" >}} and your final help file is generated. 
@@ -63,15 +67,15 @@ Depending on the way the macro module was created, more or less features are aut {{}} ### Creation of an Example Network -To add an example network to your module, you need to add a reference to the respective *.mlab* file to the module definition file (*.def*). Open the file *Filter.def*. You can find the line *exampleNetwork = "$(LOCAL)/networks/FilterExample.mlab"*, which defines the reference to the *.mlab* file containing the example network. By default, the name of the example network is *ModulenameExample.mlab*. An *.mlab* file containing only the module `Filter` is created inside the folder *networks*. +To add an example network to your module, you need to add a reference to the respective *.mlab* file to the module definition file (*.def*). Open the file *Filter.def*. You can find the line exampleNetwork = "$(LOCAL)/networks/FilterExample.mlab", which defines the reference to the *.mlab* file containing the example network. By default, the name of the example network is *ModulenameExample.mlab*. An *.mlab* file containing only the module `Filter` is created inside the folder *networks*. It is possible that the reference to the example network or the file *FilterExample.mlab* is missing. One reason could be that its creation was not selected when creating the macro module. In this case, add the reference and the file manually. -![Reference to Example Network](images/tutorials/basicmechanics/ExpNetwork_01.png "Reference to Example Network") +![Reference to the example network](images/tutorials/basicmechanics/ExpNetwork_01.png "Reference to the example network") To create the example network, open the file *FilterExample.mlab* in MeVisLab and create an appropriate example. 
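For orientation, the quoted exampleNetwork line lives inside the module's definition block in *Filter.def*. A sketch of the surrounding structure (the externalDefinition tag is typical for macro modules; details may differ in your generated file):

```Stan
MacroModule Filter {
  externalDefinition = "$(LOCAL)/Filter.script"
  exampleNetwork     = "$(LOCAL)/networks/FilterExample.mlab"
}
```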
-![Example Network](images/tutorials/basicmechanics/ExpNetwork_02.png "Example Network") +![Example network](images/tutorials/basicmechanics/ExpNetwork_02.png "Example network") ## Summary * {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch26.html" "MeVisLab MATE">}} is a built-in text editor that can be used to create module help files and module panels, or to create module interactions via Python scripting. diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/itemmodelview.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/itemmodelview.md index 1fde27d0b..2fffaa6f0 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/itemmodelview.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/itemmodelview.md @@ -42,7 +42,7 @@ If you cannot find your module via *Module Search*, reload module cache by click ### Define the Necessary Fields Add your new module `MyItemModelView` to your workspace. It does not provide a user interface and you do not have any *Fields* available. -![Empty Module](images/tutorials/basicmechanics/ItemModel_4.png "Empty Module") +![Empty module](images/tutorials/basicmechanics/ItemModel_4.png "Empty module") Open the *.script* file of your module via right-click {{< mousebutton "right" >}} and {{< menuitem "Related Files (4)" "MyItemModelView.script" >}}. @@ -99,10 +99,10 @@ Interface { If you now open your panel, you should see the *Input* inImage and the just created *Fields*. The *Field* id is necessary to identify unique objects in your *ItemModel* later. In order to make this example easier to understand, we defined all types of the *Fields* as *String*. You can also use different types, if you like. 
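A minimal sketch of the *Parameters* section this refers to (only the id field is named in the text; the real module defines further *String* fields):

```Stan
Interface {
  Parameters {
    Field id { type = String }
    // ... further String fields as defined in the tutorial
  }
}
```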
-![Module Input and Fields](images/tutorials/basicmechanics/ItemModel_5.png "Module Input and Fields") +![Module input and fields](images/tutorials/basicmechanics/ItemModel_5.png "Module input and fields") ### Add the ItemModelView to Your Panel -We can now add the *ItemModelView* to our panel and define the columns of the view, that we want to see. Add a *Window* section to your script file and define it as seen below. +We can now add the *ItemModelView* to our panel and define the columns of the view that we want to see. Add a *Window* section to your *.script* file and define it as seen below. {{< highlight filename="MyItemModelView.script" >}} ```Stan @@ -142,7 +142,7 @@ Outputs { Your module now also shows an output *MLBase* object and the columns you defined for the *ItemModelView*. -![Module Output and Columns](images/tutorials/basicmechanics/ItemModel_6.png "Module Output and Columns") +![Module output and columns](images/tutorials/basicmechanics/ItemModel_6.png "Module output and columns") ### Fill Your Table with Data We want to get the necessary information from the defined input image inImage. We want the module to update the content whenever the input image changes. Therefore, we need a *Field Listener* calling a Python function whenever the input image changes. Add it to your *Commands* section. @@ -159,7 +159,7 @@ Commands { ``` {{}} -Whenever the input image changes, the Python function *imageChanged* is executed. Right-click on the {{< mousebutton "right" >}} *imageChanged* and select {{< menuitem "Create Python Function 'imageChanged'" >}}. MATE automatically opens the Python file and creates the function. +Whenever the input image changes, the Python function imageChanged is executed. Right-click {{< mousebutton "right" >}} on imageChanged and select {{< menuitem "Create Python Function 'imageChanged'" >}}. MATE automatically opens the Python file and creates the function.
Before implementing the Python function, we have to add necessary imports and global parameters. @@ -183,7 +183,7 @@ We need to import *mevis.MLAB* and we define the attributes of our resulting vie The unique *id* is an increasing *Integer* and we can now initialize our model. #### Implement the Model -In Python, we have to define some basic classes and functions for our final model. Define a class *MyItem* which represents a single item. Each item may have children of the same type to provide a hierarchical structure. +In Python, we have to define some basic classes and functions for our final model. Define a class *MyItem*, which represents a single item. Each item may have children of the same type to provide a hierarchical structure. {{< highlight filename="MyItemModelView.py" >}} ```Python @@ -282,7 +282,7 @@ Window { {{}} #### Fill the Model With Your Data -Now, we can implement the function *imageChanged*. +Now, we can implement the function imageChanged. {{< highlight filename="MyItemModelView.py" >}} ```Python @@ -359,12 +359,12 @@ The image data is then used to create the root item of our model. We use the sel If you now open the panel of your module, you can already see the results. -![Module Panel](images/tutorials/basicmechanics/ItemModel_7.png "Module Panel") +![Module panel](images/tutorials/basicmechanics/ItemModel_7.png "Module panel") -The first line shows the information of the patient, the study and the series and each child item represents a single slice of the image. +The first line shows the information of the patient, the study and the series, and each child item represents a single slice of the image. ## Interact With Your Model -We can now add options to interact with the *ItemModelView*. Open the *.script* file of your module and go to the *Commands* section. We add a *FieldListener* to our selection field. Whenever the user selects a different item in our view, the Python function *itemClicked* in the *FieldListener* is executed. 
+We can now add options to interact with the *ItemModelView*. Open the *.script* file of your module and go to the *Commands* section. We add a *FieldListener* to our selection field. Whenever the user selects a different item in our view, the Python function itemClicked in the *FieldListener* is executed. {{< highlight filename="MyItemModelView.script" >}} ```Stan @@ -378,7 +378,7 @@ Commands { ``` {{}} -Before adding the new Python function, we need a function in our model that returns the values of items from our model. Implement the function *getItemByID* in our model the following way: +Before adding the new Python function, we need a function in our model that returns the values of items from our model. Implement the function getItemByID in our model the following way: {{< highlight filename="MyItemModelView.py" >}} ```Python @@ -402,13 +402,13 @@ def itemClicked(field: "mevislab.MLABField"): ``` {{}} -The *itemClicked* function uses *id* from the selected item to get the value of column 8 (in this case, it is the *SOP Instance UID* of the image) and prints this value. +The itemClicked function uses *id* from the selected item to get the value of column 8 (in this case, it is the *SOP Instance UID* of the image) and prints this value. -![Clicked Item](images/tutorials/basicmechanics/ItemModel_8.png "Clicked Item") +![Clicked item](images/tutorials/basicmechanics/ItemModel_8.png "Clicked item") The problem is that the *Field* selection also changes whenever a new item is added to the model. Your debug output is already flooded with SOP Instance UIDs without interaction. -![Debug Output](images/tutorials/basicmechanics/ItemModel_9.png "Debug Output") +![Debug output](images/tutorials/basicmechanics/ItemModel_9.png "Debug output") Add another global parameter to your Python script to prevent the *FieldListener* from executing during the *imageChanged* event. 
@@ -430,11 +430,11 @@ def itemClicked(field: "mevislab.MLABField"): ``` {{}} -While the *imageChanged* function is executed, the parameter is set to *False* and the *itemClicked* function does not print anything. +While the imageChanged function is executed, the parameter is set to *False* and the itemClicked function does not print anything. ## Summary * *ItemModelViews* allow you to define your own abstract hierarchical item model with generically named attributes. -* This model can be provided as Output and added to the Panel of your module. +* This model can be provided as output and added to the panel of your module. * Interactions with the model can be implemented by using a *FieldListener*. {{< networkfile "examples/basic_mechanisms/Modules.zip" >}} diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/package.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/package.md index 27ef795ff..ccb88aeeb 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/package.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/package.md @@ -40,7 +40,7 @@ Next you need to: 3. Select the path your package group is supposed to be stored in. If you like to add a package to an existing package group, select its name - and chose the path the package group is stored in. + and choose the path the package group is stored in. If you now create the package, you can find a folder structure in the desired directory. 
The folder of your package group contains the folder diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythondebugger.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythondebugger.md index ebda6360d..7adc16764 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythondebugger.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythondebugger.md @@ -21,9 +21,9 @@ menu: MeVisLab provides the powerful integrated text editor MATE. By default, MATE is used to create/edit files like Python scripts. In this tutorial, we want to show you how to debug Python scripts in MeVisLab. ## Prepare Your Network -We are using a very simple network of predefined modules, but you can also debug your self-written Python scripts. Add a `LocalImage` module to your workspace and connect it to a `DicomTagBrowser` module. The `DicomTagBrowser` module shows a table containing the DICOM tags of your currently opened file. +We are using a very simple network of predefined modules, but you can also debug your self-written Python scripts. Add a `LocalImage` module to your workspace and connect it to a `DicomTagBrowser` module. The `DicomTagBrowser` module shows a table containing the DICOM tags of your currently opened file. -![Example Network](images/tutorials/basicmechanics/Debug1.png "Example Network") +![Example network](images/tutorials/basicmechanics/Debug1.png "Example network") ## Open Python Script in MATE To debug our module, we need to open the Python file. Right-click {{< mousebutton "right" >}} the module `DicomTagBrowser` and select {{< menuitem "Related Files (3)" "DicomTagBrowser.py" >}}. The file is opened in MATE. @@ -55,19 +55,19 @@ First we need to enable debugging. In the MATE main menu, select {{< menuitem "D ### Debugging Panel The *Debugging* panel allows you to step through your code.
-![Debugging Panel](images/tutorials/basicmechanics/Debug3.png "Debugging Panel") +![Debugging panel](images/tutorials/basicmechanics/Debug3.png "Debugging panel") ### Stack Frames Panel The *Stack Frames* panel shows your current stack trace while debugging. -![Stack Frames](images/tutorials/basicmechanics/Debug4.png "Stack Frames") +![Stack frames](images/tutorials/basicmechanics/Debug4.png "Stack frames") ### Variables/Watches/Evaluate Expression Panel Another panel *Variables/Watches/Evaluate Expression* appears, where you can see all current local and global variables. Add your own variables to watch their current value and evaluate your own expressions. -![Variables/Watches/Evaluate Expression](images/tutorials/basicmechanics/Debug5.png "Variables/Watches/Evaluate Expression") +![Variables/Watches/Evaluate Expression panel](images/tutorials/basicmechanics/Debug5.png "Variables/Watches/Evaluate Expression panel") -Scroll to line 180 and left click {{< mousebutton "left" >}} on the line number. +Scroll to line 180 and left-click {{< mousebutton "left" >}} on the line number. {{< highlight >}} ```Python @@ -80,13 +80,13 @@ Scroll to line 180 and left click {{< mousebutton "left" >}} on the line number. You can see a red dot marking a break point for debugging. Whenever this line of code is executed, execution will stop here and you can evaluate your variables. This line will be reached whenever you right-click {{< mousebutton "right" >}} on the list in the `DicomTagBrowser` module and select {{< menuitem "Copy Tag Name" >}}. -Go back to MeVisLab and right click {{< mousebutton "right" >}} on any DICOM tag in the `DicomTagBrowser` module. Select {{< menuitem "Copy Tag Name" >}}. +Go back to MeVisLab and right-click {{< mousebutton "right" >}} on any DICOM tag in the `DicomTagBrowser` module. Select {{< menuitem "Copy Tag Name" >}}. 
-![Copy Tag Name](images/tutorials/basicmechanics/Debug6.png "Copy Tag Name") +![Copy tag name](images/tutorials/basicmechanics/Debug6.png "Copy tag name") MATE opens automatically and you can see an additional yellow arrow indicating the line about to be executed next. -![MATE Debugger](images/tutorials/basicmechanics/Debug7.png "MATE Debugger") +![MATE debugger](images/tutorials/basicmechanics/Debug7.png "MATE debugger") You can now use the controls of the *Debugging* panel to step through your code or just continue execution of your code. Whenever your execution is stopped, you can use the *Stack Frames* and the *Variables/Watches/Evaluate Expression* panel to see the current value of all or just watched variables. @@ -112,14 +112,14 @@ The *Variables* panel now shows all currently available local and global variabl ![Variables/Watches panel](images/tutorials/basicmechanics/Debug7a.png "Variables/Watches panel") ## Conditions for Breakpoints -You can also define conditions for your breakpoints. Remove breakpoint in line 180 and set a new one in line 181. In the case you only want to stop the execution of your script if a specific condition is met, right click {{< mousebutton "right" >}} on your breakpoint and select {{< menuitem "Set Condition for Breakpoint" >}}. A dialog opens where you can define your condition. Enter **item.text(1) == 'SOPClassUID'** as condition. +You can also define conditions for your breakpoints. Remove the breakpoint in line 180 and set a new one in line 181. If you only want to stop the execution of your script when a specific condition is met, right-click {{< mousebutton "right" >}} on your breakpoint and select {{< menuitem "Set Condition for Breakpoint" >}}. A dialog opens where you can define your condition. Enter **item.text(1) == 'SOPClassUID'** as the condition.
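The condition is an ordinary Python expression evaluated against the current frame. Conceptually, it behaves like this plain-Python sketch with a stand-in item class (not MATE's actual implementation):

```python
class Item:
    # Minimal stand-in for the list item inspected at the breakpoint.
    def __init__(self, columns):
        self._columns = columns

    def text(self, col):
        return self._columns[col]

def breakpoint_hits(item):
    # The condition string entered in MATE, evaluated as Python
    # with the local variable `item` in scope.
    return eval("item.text(1) == 'SOPClassUID'", {}, {"item": item})

sop_class = Item(["(0008,0016)", "SOPClassUID"])
sop_instance = Item(["(0008,0018)", "SOPInstanceUID"])
```

Execution stops only for the first item; for any other copied tag, the condition is false and execution continues.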
-![Conditions for Breakpoints](images/tutorials/basicmechanics/Debug8.png "Conditions for Breakpoints") +![Conditions for breakpoints](images/tutorials/basicmechanics/Debug8.png "Conditions for breakpoints") Now, the code execution is only stopped if you copy the tag name *SOPClassUID*. If another line is copied, the execution does not stop and just continues. ## Evaluate Expression -The *Evaluate Expression* tab allows you to modify variables during execution. In our example you can set the result **item.text(1)** to something like **item.setText(1, "Hello")**. If you now step to the next line via {{< keyboard "F10" >}}, your watched value shows *"Hello"* instead of *"SOPClassUID"*. +The *Evaluate Expression* tab allows you to modify variables during execution. In our example, you can set the result item.text(1) to something like item.setText(1, "Hello"). If you now step to the next line via {{< keyboard "F10" >}}, your watched value shows *"Hello"* instead of *"SOPClassUID"*. {{< imagegallery 2 "images/tutorials/basicmechanics" "Debug9" "Debug9a" >}} diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythonpip.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythonpip.md index 30ddad0df..0cb25ab55 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythonpip.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythonpip.md @@ -15,14 +15,14 @@ menu: # Example 4: Installing Additional Python Packages Using the PythonPip Module ## Introduction -MeVisLab already comes with a lot of integrated third-party software tools ready to use. Nevertheless, it might be necessary to install additional Python packages for your specific needs. This example will walk you through the process of adding packages through usage of/using the `PythonPip` module. +MeVisLab already comes with a lot of integrated third-party software tools ready to use.
Nevertheless, it might be necessary to install additional Python packages for your specific needs. This example will walk you through the process of adding packages by using the `PythonPip` module. -The `PythonPip` module allows to work with the Python package manager pip. It can be used to install Python packages into the site-packages of the MeVisLab Python installation. +The `PythonPip` module allows you to work with the Python package manager *pip*. It can be used to install Python packages into the site-packages of the MeVisLab Python installation. It technically provides the full Python package ecosystem, though you will have to keep some things in mind to prevent your newly added packages from interfering with the existing ones that MeVisLab operates on: * Packages can contain C-Extensions (since we use the same MSVC compiler resp. same GCC settings as Python 3 itself), *but* you can only install packages that do not interfere with packages or DLLs that are already part of MeVisLab. This means that installing packages with C-Extensions might work in many circumstances, but is not guaranteed to work -**All installed packages with C-Extensions are release only**, so you can only import them in a release MeVisLab (under Windows) +**All installed packages with C-Extensions are release only**, so you can only import them in a release MeVisLab (on Windows) {{}} On Windows: Existing packages (e.g., *NumPy*) can only be upgraded if they haven't already been loaded by MeVisLab's Python. So please make sure to start with a *fresh* MeVisLab
+In [Example 1: Installing PyTorch Using the PythonPip Module](tutorials/thirdparty/pytorch/pytorchexample1/), we are installing PyTorch to use it in MeVisLab scripting. ## Summary * The `PythonPip` module allows you to install additional Python packages to adapt MeVisLab to a certain extent. diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythonscripting.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythonscripting.md index c0c117915..7a7fc3039 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythonscripting.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythonscripting.md @@ -16,7 +16,6 @@ menu: # Example 2.5: Module Interactions Using Python Scripting {#TutorialPythonScripting} ## Introduction - This chapter will give you an overview of Python scripting in MeVisLab. Here, no introduction to Python will be given. However, basic knowledge in Python is helpful. Instead, we will show how to integrate and use Python in the MeVisLab SDK. In fact, nearly everything in MeVisLab can be done via Python scripting: You can add modules to your network, or remove modules, you can dynamically establish and remove connections, and so on. But, much more importantly: You can access module inputs and outputs, as well as module fields to process their parameters and data. You can equip user interfaces and panels with custom functionalities. Python can be used to implement module interactions. When you open a panel or you press a button in a panel, the executed actions are implemented via Python scripting.
-* *ctx.field("* < ModuleInput > *").connectFrom("* < ModuleOutput > *")* : Draw a connection from one module's output to another module's input. +* *ctx.field("* < ModuleInput > *").connectFrom("* < ModuleOutput > *")* : Establish a connection from one module's output to another module's input. In this case, we added the modules `DicomImport` and `View2D` to the workspace and connected both modules. @@ -45,7 +44,7 @@ In this case, we added the modules `DicomImport` and `View2D` to the workspace a It is also possible to add notes to your workspace. -![Add a note to the workspace](images/tutorials/basicmechanics/Scripting_04.png "Add a note to your workspace") +![Add a note to your workspace](images/tutorials/basicmechanics/Scripting_04.png "Add a note to your workspace") ### Access Modules and Module Fields You can access modules via *ctx.module("* < ModuleName > *")*. From this object, you can access module fields, module inputs and outputs, and everything in the context of this module. @@ -57,15 +56,14 @@ You can also directly access a module field via *ctx.field("* < ModuleName.Field ![Access modules and module fields](images/tutorials/basicmechanics/Scripting_05.png "Access modules and module fields") ### Python Scripting Reference -{{< docuLinks "/Resources/Documentation/Publish/SDK/ScriptingReference/group__scripting.html" "Here" >}} you can find the Scripting Reference. In the Scripting Reference you can find information about different Python classes used in MeVisLab and their methods. +{{< docuLinks "/Resources/Documentation/Publish/SDK/ScriptingReference/group__scripting.html" "Here" >}} you can find the Scripting Reference. In the Scripting Reference, you can find information about different Python classes used in MeVisLab and their methods. [//]: <> (MVL-653) ## Where and How to Use Python Scripting #### Scripting View - -Under {{< menuitem "View" "Views" "Scripting" >}} you can find the View *Scripting*.
The view offers a standard Python console, without any meaningful network or module context. This means only general Python functionalities can be tested and used. Access to modules or your network is not possible. +Under {{< menuitem "View" "Views" "Scripting" >}} you can find the View *Scripting*. The view offers a standard Python console without any meaningful network or module context. This means only general Python functionalities can be tested and used. Access to modules or your network is not possible. #### Scripting Console You can open the *Scripting Console* via {{< menuitem "Scripting" "Show Scripting Console" >}}. In the context of your workspace, you can access your network and modules. @@ -74,10 +72,10 @@ You can open the *Scripting Console* via {{< menuitem "Scripting" "Show Scriptin Every module offers a scripting console. Open the context menu of a module and select {{< menuitem "Show Window" "Scripting Console" >}}. You can work in the context (*ctx.*) of this module. #### Module `RunPythonScript` -The module `RunPythonScript` allows to execute Python scripts from within a MeVisLab network. You can draw parameter connection from modules to `RunPythonScript` and back, to process parameter fields using Python scripting. An example for the usage of `RunPythonScript` can be found [here](../scriptingexample1/). +The module `RunPythonScript` allows you to execute Python scripts from within a MeVisLab network. You can establish parameter connections from modules to `RunPythonScript` and back to process parameter fields using Python scripting. An example of the usage of `RunPythonScript` can be found [here](../scriptingexample1/). #### Module Interactions via Python Scripting -You can reference to a Python function in a *.script* file of a macro module.
With this, you can, for example, execute a Python function whenever you open a panel, or define the action that is executed when pressing a button or specify the command triggered by a [field listener](tutorials/basicmechanisms/macromodules/scriptingexample2). An example for module interactions via Python scripting is given in the same example. +You can reference a Python function in a *.script* file of a macro module. With this, you can, for example, execute a Python function whenever you open a panel, define the action that is executed when pressing a button, or specify the command triggered by a [field listener](tutorials/basicmechanisms/macromodules/scriptingexample2). An example of module interactions via Python scripting is given in the same example. #### Python Scripting in Network Files (*.mlab*) If you do not want to create a macro module, you can also execute Python scripts in a network file (*.mlab*). Save your network using a defined name, for example, *mytest.mlab*. Then, create a *.script* and a *.py* file in the same directory, using the same names (*mytest.script* and *mytest.py*). @@ -103,14 +101,14 @@ print("Hello") ``` {{}} -If you now use the menu item {{< menuitem "Scripting" "Start Network Script" >}}, the script can be executed inside your network. You can also use the keyboard shortcut {{< keyboard "ctrl+R" >}}. +If you now use the menu item {{< menuitem "Scripting" "Start Network Script" >}}, the script can be executed inside your network. You can also use the keyboard shortcut {{< keyboard "Ctrl" "R" >}}. ## Tips and Tricks -#### Scripting Assistant -Under {{< menuitem "View" "Views" "Scripting Assistant" >}} you can find the view *Scripting Assistant*. In this view, the actions you execute in the workspace are translated into Python script. +#### Scripting Assistant +Under {{< menuitem "View" "Views" "Scripting Assistant" >}} you can find the view Scripting Assistant.
In this view, the actions you execute in the workspace are translated into Python script. -For example: Open the *Scripting Assistant*. Add the module `WEMInitialize` to your workspace. You can select a Model, for example, the cube. In addition, you can change the Translation and press *Apply*. All these actions can be seen in the *Scripting Assistant* translated into Python code. Therefore, the *Scripting Assistant* is a powerful tool to help you to script you actions. +For example: Open the Scripting Assistant. Add the module `WEMInitialize` to your workspace. You can select a Model, for example, the cube. In addition, you can change the Translation and press *Apply*. All these actions can be seen in the Scripting Assistant translated into Python code. Therefore, the Scripting Assistant is a powerful tool to help you script your actions. ![Scripting Assistant](images/tutorials/basicmechanics/Scripting_01.png "Scripting Assistant") diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/scriptingexample1.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/scriptingexample1.md index b9d45e868..a16fbcc26 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/scriptingexample1.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/scriptingexample1.md @@ -18,20 +18,21 @@ {{< youtube "O5Get1PMOq8" >}} ## Introduction -The module `RunPythonScript` allows to execute Python scripts from within a MeVisLab network. You can draw parameter connection from modules to `RunPythonScript` and back to process parameter fields using Python scripting. +The module `RunPythonScript` allows you to execute Python scripts from within a MeVisLab network. You can establish parameter connections from modules to `RunPythonScript` and back to process parameter fields using Python scripting.
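The counter-to-color idea used in this example can be sketched without MeVisLab. The snippet below is an illustration only: `updateOutputValue` is a recording stub standing in for the `RunPythonScript` method of the same name, and the color formula is invented for the sketch.

```python
import math

# Stand-in for RunPythonScript's updateOutputValue(name, value); in MeVisLab it
# writes to the named output field, here we simply record the value.
outputs = {}

def updateOutputValue(name, value):
    outputs[name] = value

def on_counter_changed(t):
    """Map a counter value t to an RGB color so that the color cycles over time."""
    r = 0.5 + 0.5 * math.sin(0.1 * t)
    g = 0.5 + 0.5 * math.sin(0.1 * t + 2.0)
    b = 0.5 + 0.5 * math.sin(0.1 * t + 4.0)
    updateOutputValue("DiffuseColor", (r, g, b))

on_counter_changed(12)
```

In MeVisLab, the body of `on_counter_changed` would live inside the `RunPythonScript` panel, with the counter arriving through a renamed input field.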
## Steps to Do ### Develop Your Network In this example, we would like to dynamically change the color of a cube in an Open Inventor scene. For that, add and connect the following modules as shown. -![RunPythonScript Example](images/tutorials/basicmechanics/Scripting_06.png "RunPythonScript") +![RunPythonScript example](images/tutorials/basicmechanics/Scripting_06.png "RunPythonScript example") ### Scripting Using the Module `RunPythonScript` Open the panel of `RunPythonScript`. There is an option to display input and output fields. For that, tick the box *Fields* on the top left side of the panel. -You can also name these fields individually by ticking the box *Edit field titles*. Call the first input field TimeCounter and draw a parameter connection from the field Value of the panel of `TimeCounter` to the input field TimeCounter of the module `RunPythonScript`. -We can name the first output field DiffuseColor and draw a parameter connection from this field to the field Diffuse Color in the panel of the module `SoMaterial`. +You can also name these fields individually by ticking the box *Edit field titles*. Call the first input field TimeCounter and establish a parameter connection from the field Value of the panel of `TimeCounter` to the input field TimeCounter of the module `RunPythonScript`. + +We can name the first output field DiffuseColor and establish a parameter connection from this field to the field Diffuse Color in the panel of the module `SoMaterial`. ![TimeCounter](images/tutorials/basicmechanics/Scripting_07.png "TimeCounter") @@ -57,4 +58,4 @@ You can now see a color change in the viewer `SoExaminerViewer` every time the ` ## Summary * The module `RunPythonScript` can be used to process module fields in your network using Python scripting. -* Use the methods *updateOutputValue(name, value)* or *setOutputValue(name, value)* to update output fields of `RunPythonScript`.
\ No newline at end of file +* Use the methods `updateOutputValue(name, value)` or `setOutputValue(name, value)` to update output fields of `RunPythonScript`. \ No newline at end of file diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/scriptingexample2.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/scriptingexample2.md index 8877b139f..f7799ba36 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/scriptingexample2.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/scriptingexample2.md @@ -13,7 +13,7 @@ menu: parent: "macro_modules" --- -# Example 2.5.2: Module Interactions Via Python Scripting +# Example 2.5.2: Module Interactions via Python Scripting {{< youtube "hGq6vA7Ll9Q" >}} @@ -32,7 +32,7 @@ Now, you have to edit: 3. Directory Structure: Change to *Self-contained* (this setting is only available in MeVisLab versions before 5.0.0, later versions always use *self-contained*) 4. Project: Select your project name -Press *Next* and edit the following: +Click Next > and edit the following: 1. Copy existing network: Select the example network 2. Check the box: Add Python file
Keep in mind that, as we have not created any CSOs yet, the right viewer stays black. @@ -92,7 +92,7 @@ You may want to change the design setting of the right viewer. ![Changed viewer settings](images/tutorials/basicmechanics/ChangedViewerSettings.png "Changed viewer settings") ### Selection of Images -Next, we like to add the option to browse through the folders and select the image, we like to create CSOs from. This functionality is already given in the internal network in the module `LocalImage`. We can copy this functionality from `LocalImage` and add this option to the panel above both viewers. But, how should we know, which field name we reference to? To find this out, open the internal network of your macro module. Now you are able to open the panel of the module `LocalImage`. Right-click {{< mousebutton "right" >}} the desired field: In this case, right-click the label Name. Select {{< menuitem "Copy Name" >}}, to copy the internal name of this field. +Next, we would like to add the option to browse through the folders and select the image we want to create CSOs from. This functionality is already given in the internal network in the module `LocalImage`. We can copy this functionality from `LocalImage` and add this option to the panel above both viewers. But how do we know which field name to reference? To find this out, open the internal network of your macro module. Now, you are able to open the panel of the module `LocalImage`. Right-click {{< mousebutton "right" >}} the desired field: In this case, right-click {{< mousebutton "right" >}} the label Name. Select {{< menuitem "Copy Name" >}} to copy the internal name of this field.
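The copied internal name is what a macro's *.script* interface refers to. A minimal sketch of such a fragment (the outer field name `imageName` is invented for illustration; the `Field`/`internalName` pattern is the one used in the following steps):

```
Interface {
  Parameters {
    // Expose the internal LocalImage.name field on the macro's own interface
    Field imageName { internalName = LocalImage.name }
  }
}
```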
![Copy the field name](images/tutorials/basicmechanics/GUI_Exp_09.png "Copy the field name") @@ -142,7 +142,7 @@ To create the *Browse\...* button: To create the Iso Generator Button: -We like to copy the field of the *Update* button from the internal module `IsoCSOGenerator`, but not its layout so: +We would like to copy the field of the *Update* button from the internal module `IsoCSOGenerator` but not its layout, so: 1. Create a new Field in the interface, called IsoGenerator, which contains the internal field apply from the module `CSOIsoGenerator`. 2. Create a new Button in your Window that uses the field IsoGenerator. @@ -215,20 +215,20 @@ def fileDialog(): ``` {{}} -![Automatically generate CSOs based on Iso value](images/tutorials/basicmechanics/GUI_Exp_14.png "Automatically generate CSOs based on Iso value") +![Automatically generate CSOs based on an isovalue](images/tutorials/basicmechanics/GUI_Exp_14.png "Automatically generate CSOs based on an isovalue") ### Colorizing CSOs -We like to colorize the CSO we hover over with our mouse in the 2D viewer. Additionally, when clicking a CSO with the left mouse button {{< mousebutton "left" >}}, this CSO shall be colorized in the 3D viewer. This functionality can be implemented via Python scripting (even though MeVisLab has a build-in function to do that). We can do this in the following way: +We would like to colorize the CSO we hover over with our mouse in the 2D viewer. Additionally, when clicking a CSO with the left mouse button {{< mousebutton "left" >}}, this CSO shall be colorized in the 3D viewer. This functionality can be implemented via Python scripting (even though MeVisLab has a built-in function to do that). We can do this in the following way: -1. Enable the View *Scripting Assistant*, which translates actions into Python code. +1. Enable the view Scripting Assistant, which translates actions into Python code. ![Scripting Assistant](images/tutorials/basicmechanics/GUI_Exp_15.png "Scripting Assistant") -2.
Enable a functionality that allows us to notice the ID of the CSO we are currently hovering over with our mouse. For this, open the internal network of our macro module. We will use the module `SoView2DCSOExtensibleEditor`. Open its panel and select the tab *Advanced*. You can check a box to enable Update CSO id under mouse. If you now hover over a CSO, you can see its ID in the panel. We can save the internal network to save this functionality, but we can also solve our problem via scripting. The Scripting Assistant translated our action into code that we can use. +2. Enable a functionality that allows us to identify the ID of the CSO we are currently hovering over with our mouse. For this, open the internal network of our macro module. We will use the module `SoView2DCSOExtensibleEditor`. Open its panel and select the tab *Advanced*. You can check a box to enable Update CSO id under mouse. If you now hover over a CSO, you can see its ID in the panel. We can save the internal network to save this functionality, but we can also solve our problem via scripting. The Scripting Assistant translated our action into code that we can use. - ![Enabling CSO id identification](images/tutorials/basicmechanics/GUI_Exp_16.png "Enabling CSO id identification") + ![Enabling CSO ID identification](images/tutorials/basicmechanics/GUI_Exp_16.png "Enabling CSO ID identification") - We like to activate this functionality when opening the panel of our macro module `IsoCSOs`. Thus, we add a starting command to the control Window. We can call this command, for example, *enableFunctionalities*. + We like to activate this functionality when opening the panel of our macro module `IsoCSOs`. For this, we add a starting command to the control Window. We can call this command, for example, *enableFunctionalities*. In the *.script* file: @@ -247,7 +247,7 @@ Window { ``` {{}} -In the Python file, we define the function *enableFunctionalities*. 
We see our action as Python code in the *Scripting Assistant*. Just copy the code into our Python function. +In the Python file, we define the function *enableFunctionalities*. We see our action as Python code in the Scripting Assistant. Just copy the code into our Python function. {{< highlight filename="IsoCSOs.py" >}} ```Python @@ -271,7 +271,7 @@ Commands { ``` {{}} -In the Python file: +In the *.py* file: {{< highlight filename="IsoCSOs.py" >}} ```Python @@ -323,6 +323,6 @@ TabViewItem Settings { * The control *Button* creates a button executing a Python function when pressed. * The tag *WindowActivationCommand* of the control Window triggers Python functions executed when opening the panel. * Field listeners can be used to activate Python functions triggered by a change of defined parameter fields. -* Use the view *Scripting Assistant* to translate actions into Python code. +* Use the view Scripting Assistant to translate actions into Python code. {{< networkfile "examples/basic_mechanisms/macro_modules_and_module_interaction/example2/ScriptingExample2.zip" >}} diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/soviewportregion.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/soviewportregion.md index 0cd4a1a5a..781c298e9 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/soviewportregion.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/soviewportregion.md @@ -8,12 +8,12 @@ tags: ["Beginner", "Tutorial", "SoViewportRegion", "Layout", "Multi-View"] menu: main: identifier: "soviewportregion" - title: "Creating Multi View Layouts Using SoViewportRegion" + title: "Creating Multi-View Layouts Using SoViewportRegion" weight: 460 parent: "basicmechanisms" --- -# Example 6: Creating Multi View Layouts Using SoViewportRegion +# Example 6: Creating Multi-View Layouts Using SoViewportRegion ## Introduction In this guide, we will show how to use the `SoViewportRegion`
module to create custom layouts within the `SoRenderArea` module. This allows you to display multiple views or slices in a single window. @@ -28,17 +28,17 @@ We will demonstrate how to: ### Displaying Three Images in One Panel Add an `ImageLoad` module to your workspace and select a 3D image like *./MeVisLab/Resources/DemoData/MRI_Head.tif* from the MeVisLab demo data directory. Connect an `OrthoReformat3` module and add three `View2D` modules. -![Image Display Setup](images/tutorials/basicmechanics/E6_1.png "Image Display Setup") +![Image display setup](images/tutorials/basicmechanics/E6_1.png "Image display setup") Opening the three `View2D` module panels now shows the image data in three orthogonal views. The module `OrthoReformat3` transforms the input image (by rotating and/or flipping) into the three main views commonly used. -![3 Views in 3 Viewers](images/tutorials/basicmechanics/E6_2.png "3 Views in 3 Viewers") +![Three views in three viewers](images/tutorials/basicmechanics/E6_2.png "Three views in three viewers") The module `SoViewportRegion` divides the render window into multiple areas, allowing different views or slices to be shown in the same window. It's useful in medical applications, like displaying MRI or CT images from different angles (axial, sagittal, coronal) at once, making data analysis easier and faster. Add three `SoViewportRegion` modules and connect each one to a `View2D` module. To display the hidden outputs of the `View2D` module, press {{< keyboard "SPACE" >}} and connect the output to the input of `SoViewportRegion` as shown below. -![Connect SoViewportRegion with View2D](images/tutorials/basicmechanics/E6_3.png "Connect SoViewportRegion with View2D") +![Connect SoViewportRegion to View2D](images/tutorials/basicmechanics/E6_3.png "Connect SoViewportRegion to View2D") Add a `SoRenderArea` for your final result to the network and connect all three `SoViewportRegion` modules to it. 
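Each `SoViewportRegion` covers a rectangle given as fractions of the render window. The arithmetic can be sketched in plain Python; this is an illustration of the target layout only (axial filling the left half is an assumption matching the layout configured below), not the `SoViewportRegion` API:

```python
def viewport(x0, y0, x1, y1):
    """A region as fractions of the window size, origin at the bottom left."""
    assert 0.0 <= x0 < x1 <= 1.0 and 0.0 <= y0 < y1 <= 1.0
    return (x0, y0, x1, y1)

# Axial view fills the left half; coronal (top) and sagittal (bottom)
# share the right half of the window.
axial = viewport(0.0, 0.0, 0.5, 1.0)
coronal = viewport(0.5, 0.5, 1.0, 1.0)
sagittal = viewport(0.5, 0.0, 1.0, 0.5)
```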
@@ -63,7 +63,7 @@ We want to create a layout with the following setting: * Coronal view on the top right side * Sagittal view on the bottom right side -![Target Layout](images/tutorials/basicmechanics/E6_6.png "Target Layout") +![Target layout](images/tutorials/basicmechanics/E6_6.png "Target layout") Now, open the left `SoViewportRegion` module and change settings: @@ -78,7 +78,7 @@ Now, open the left `SoViewportRegion` module and change settings: * *Domain* Fraction of height * *Reference* Upper window border -![Axial View](images/tutorials/basicmechanics/E6_7.png "Axial View") +![Axial view](images/tutorials/basicmechanics/E6_7.png "Axial view") Continue with the middle `SoViewportRegion` module and change settings: @@ -93,7 +93,7 @@ Continue with the middle `SoViewportRegion` module and change settings: * *Domain* Fraction of smallest dimension * *Reference* Upper window border -![Coronal View](images/tutorials/basicmechanics/E6_8.png "Coronal View") +![Coronal view](images/tutorials/basicmechanics/E6_8.png "Coronal view") The right `SoViewportRegion` module should look as follows: @@ -108,16 +108,16 @@ The right `SoViewportRegion` module should look as follows: * *Domain* Fraction of smallest dimension * *Reference* Upper window border -![Sagittal View](images/tutorials/basicmechanics/E6_9.png "Sagittal View") +![Sagittal view](images/tutorials/basicmechanics/E6_9.png "Sagittal view") #### Displaying Four Images in One Panel In the next example, the `SoRenderArea` will display four views at the same time: axial, coronal, sagittal, and a 3D view. -![3D View Layout](images/tutorials/basicmechanics/E6_11.png "3D View Layout") +![3D view layout](images/tutorials/basicmechanics/E6_11.png "3D view layout") These views will be arranged in a single panel that is split into two sides with each side showing two images. To add the 3D view, insert a `View3D` module and connect it to the `ImageLoad` module. 
Then, connect the `View3D` to `SoCameraInteraction`, connect that to another `SoViewportRegion`, and finally to `SoRenderArea`. -![3D View Network](images/tutorials/basicmechanics/E6_10.png "3D View Network") +![3D view network](images/tutorials/basicmechanics/E6_10.png "3D view network") Now, open the left `SoViewportRegion` module and change settings: @@ -147,11 +147,11 @@ Open the right `SoViewportRegion` connected to the `SoCameraInteraction` module This setup will let you interact with the 3D view and display all four views together as shown in the figure below. -![3D View](images/tutorials/basicmechanics/E6_12.png "3D View") +![3D view](images/tutorials/basicmechanics/E6_12.png "3D view") You will see that the orientation cube of the 3D viewer appears in the bottom right corner of the `SoRenderArea`. To resolve this, you can check *Render delayed paths* in the `SoViewportRegion` module of the 3D viewer. -![Final Network](images/tutorials/basicmechanics/E6_13.png "Final Network") +![Final network](images/tutorials/basicmechanics/E6_13.png "Final network") ## Alternatively Using `SoView2D` In the case you want the same dataset to be visualized in multiple viewers, the module `SoView2D` already provides this functionality. @@ -160,11 +160,11 @@ In the case you want the same dataset to be visualized in multiple viewers, the Whenever you are using the `SoView2D` module to visualize a 2D dataset, you need to add a `View2DExtensions` module and, for example, a `SoRenderArea` module. Without the `View2DExtensions` module, interactions like scrolling through slices or changing the window and level settings will not be possible. -By default, you will see your images in a single viewer the same way as if you use the `View2D` module. The *number of columns* is defined as *1* by default. If you now change the *Number of Slices* to something like *3*, you will see three viewers shown in a single column. 
As we can only connect one dataset, this network cannot display multiple series at the same time. +By default, you will see your images in a single viewer the same way as if you use the `View2D` module. The number of Columns defaults to *1*. If you now change the Number of Slices to something like *3*, you will see three viewers shown in a single column. As we can only connect one dataset, this network cannot display multiple series at the same time. ![Multiple slices in SoView2D](images/tutorials/basicmechanics/SoView2D_2.png "Multiple slices in SoView2D") -Changing the *number of columns* to *3* and the *Number of Slices* to *9* results in a 3x3 layout. +Changing the number of Columns to *3* and the Number of Slices to *9* results in a 3x3 layout. ![Multiple slices and columns in SoView2D](images/tutorials/basicmechanics/SoView2D_3.png "Multiple slices and columns in SoView2D") diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/viewerexample.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/viewerexample.md index 79bb24d0e..81dddf0e5 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/viewerexample.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/viewerexample.md @@ -38,17 +38,17 @@ Opening your viewers should now show the images in 2D and 3D. Now, save your network as a *.mlab* file and remember the location. ### Create a Macro Module -Open the Project Wizard via {{< menuitem "File" "Run Project Wizard" >}} and run the Wizard for a *macro module*. Name your module *MyViewerApplication*, enter your details, and click *Next >*. +Open the Project Wizard via {{< menuitem "File" "Run Project Wizard" >}} and run the Wizard for a *macro module*. Name your module *MyViewerApplication*, enter your details, and click Next >.
-![Module Properties](images/tutorials/basicmechanics/SimpleApp_03.png "Module Properties") +![Module properties](images/tutorials/basicmechanics/SimpleApp_03.png "Module properties") -On the next screen, make sure to add a Python file and use the existing network you previously saved. Click *Next >*. +On the next screen, make sure to add a Python file and use the existing network you previously saved. Click Next >. -![Macro module Properties](images/tutorials/basicmechanics/SimpleApp_04.png "Macro module Properties") +![Macro module properties](images/tutorials/basicmechanics/SimpleApp_04.png "Macro module properties") -You can leave all fields empty for now and just click *Create*. +You can leave all fields empty for now and just click Create. -![Module Field Interface](images/tutorials/basicmechanics/SimpleApp_05.png "Module Field Interface") +![Module field interface](images/tutorials/basicmechanics/SimpleApp_05.png "Module field interface") MeVisLab reloads its internal database and you can open a new tab. Search for your newly created module, in our case it was *MyViewerApplication*. @@ -59,9 +59,9 @@ In the case you double-click {{< mousebutton "left" >}} your module now, you wil ### Develop Your User Interface Before adding your own UI, open the internal network of your macro module via right-click {{< mousebutton "right" >}} and {{< menuitem "Related Files" "MyViewerApplication.mlab" >}}. Open the panel of your `ImageLoad` module and set *filename* to an empty string (clear). This is necessary for later. -Now, right-click on your *MyViewerApplication* and select {{< menuitem "Related Files" "MyViewerApplication.script" >}} +Now, right-click {{< mousebutton "right" >}} on your *MyViewerApplication* and select {{< menuitem "Related Files" "MyViewerApplication.script" >}} -{{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch26.html" "MATE">}} opens showing your script file. 
You already learned how to create simple UI elements in [Example 2.4](tutorials/basicmechanisms/macromodules/guidesign). Now, we will create a little more complex UI including your `View2D` and `View3D`. +{{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch26.html" "MATE">}} opens showing your *.script* file. You already learned how to create simple UI elements in [Example 2.4](tutorials/basicmechanisms/macromodules/guidesign). Now, we will create a slightly more complex UI including your `View2D` and `View3D`. First, we need a new *Field* in your *Parameters* section. Name the field filepath and set internalName to ImageLoad.filename. @@ -146,10 +146,10 @@ Window { We have a vertical layout with two items placed horizontally next to each other. The new *Button* gets the title *Reset* but does nothing yet, because we did not add a Python function to a command. -Additionally, we added the `View2D` and the `View3D` to our *Window* and defined the height, width, and the expandX/Y property to *yes*. This leads our viewers to resize together with our *Window*. +Additionally, we added the `View2D` and the `View3D` to our *Window*, defined the height and width, and set the expandX/Y property to *Yes*. This makes our viewers resize together with our *Window*. {{}} -Additional information about the `View2D` and `View3D` options can be found in the MeVisLab {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_Viewer" "MDL Reference">}} +Additional information about the `View2D` and `View3D` options can be found in the MeVisLab {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_Viewer" "MDL Reference" >}} {{}} You can now play around with your module in MeVisLab SDK. Open the *Window* and select a file. You can see the two viewers showing the 2D and 3D images. You can interact with your viewers the same way as in your MeVisLab network.
All functionalities are taken from the modules and transferred to your user interface. @@ -157,7 +157,7 @@ You can now play around with your module in MeVisLab SDK. Open the *Window* and ![2D and 3D viewers in our application](images/tutorials/basicmechanics/SimpleApp_09.png "2D and 3D viewers in our application") ### Develop a Python Function for Your Button -Next, we want to reset the filepath to an empty string on clicking our *Reset* button. Add the *reset* command to your Button. +Next, we want to reset the filepath to an empty string on clicking our Reset button. Add the *reset* command to your Button. {{< highlight filename="MyViewerApplication.script" >}} ``` Stan ... @@ -200,7 +200,7 @@ Commands { ``` {{}} -In the above example, we react on changes of the field startSlice of the module `View2D`. Whenever the field value (currently displayed slice) changes, the Python function *printCurrentSliceNumber* is executed. +In the above example, we react to changes of the field startSlice of the module `View2D`. Whenever the field value (currently displayed slice) changes, the Python function printCurrentSliceNumber is executed. In your Python file `MyViewerApplication.py`, you can now add the following:
can print the results of a function as two-dimensional mathematical graphs into a diagram. -Usage, advantages, and disadvantages of each above-mentioned data object type will be covered in the following specified chapters, where you will be building example networks for some of the most common use cases. +Usage, advantages, and disadvantages of each above-mentioned data object type will be covered in the following chapters, where you will build example networks for some of the most common use cases. diff --git a/mevislab.github.io/content/tutorials/dataobjects/contourobjects.md b/mevislab.github.io/content/tutorials/dataobjects/contourobjects.md index 5e11d5cdd..9b5e9c1c4 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contourobjects.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contourobjects.md @@ -30,7 +30,7 @@ The *Path Points* form the connection between the *Seed Points* whereby contour In general, the *Seed Points* are created interactively using an editor module and the *Path Points* are generated automatically by interpolation or other algorithms. -![Contour Segmented Object (CSO)](images/tutorials/dataobjects/contours/CSO_Expl_01.png "Contour Segmented Object (CSO)") +![Contour Segmentation Object (CSO)](images/tutorials/dataobjects/contours/CSO_Expl_01.png "Contour Segmentation Object (CSO)") #### CSO Editors {#CSOEditors} As mentioned, when creating CSOs, you can do this interactively by using an editor. @@ -43,21 +43,21 @@ The following images show editors available in MeVisLab for drawing CSOs: The `SoCSOIsoEditor` and `SoCSOLiveWireEditor` are special, because they are using an algorithm to detect edges themselves. * The `SoCSOIsoEditor` generates isocontours interactively. -* The `SoCSOLiveWireEditor` renders and semi-interactively generates CSOs based on the LiveWire algorithm. 
+* The `SoCSOLiveWireEditor` renders and semi-interactively generates CSOs based on the [LiveWire](https://en.wikipedia.org/wiki/Livewire_Segmentation_Technique) algorithm. {{
}} ### CSO Lists and CSO Groups All created CSOs are stored in CSO lists that can be saved and loaded on demand. The lists can not only store the coordinates of the CSOs, but also additional information in the form of name-value pairs (using specialized modules or Python scripting). -![Basic CSO Network](images/tutorials/dataobjects/contours/BasicCSONetwork.png "Basic CSO Network") +![Basic CSO network](images/tutorials/dataobjects/contours/BasicCSONetwork.png "Basic CSO network") Each `SoCSO*Editor` requires a `SoView2DCSOExtensibleEditor` that manages attached CSO editors and renderers and offers an optional default renderer for all types of CSOs. In addition to that, the list of CSOs needs to be stored in a `CSOManager`. The appearance of the CSO can be defined by using a `SoCSOVisualizationSettings` module. -CSOs can also be grouped together. The following image shows two different CSO groups. Groups can be used to organize CSOs, in this case to distinguish the CSOs of the right and the left lung. [Here](tutorials/dataobjects/contours/contourexample2/) you can find more information about CSO Groups. +CSOs can also be grouped together. The following image shows two different CSOGroups. Groups can be used to organize CSOs, in this case to distinguish the CSOs of the right and the left lung. [Here](tutorials/dataobjects/contours/contourexample2/) you can find more information about CSOGroups.
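The CSO list/group concept can be pictured as a small data model in plain Python. This is a conceptual stand-in only; the real `CSOList`/`CSOManager` API differs, and the class and attribute names below are made up for illustration:

```python
# Minimal stand-in for a CSO list: each CSO carries seed-point coordinates
# plus arbitrary name-value pairs; groups collect CSO ids for joint handling.

class CSO:
    def __init__(self, cso_id, points):
        self.id = cso_id
        self.points = points          # list of (x, y, z) seed points
        self.user_data = {}           # additional name-value pairs

class CSOList:
    def __init__(self):
        self.csos = {}                # id -> CSO
        self.groups = {}              # group label -> set of CSO ids

    def add(self, cso):
        self.csos[cso.id] = cso

    def group(self, label, cso_ids):
        self.groups.setdefault(label, set()).update(cso_ids)

cso_list = CSOList()
cso = CSO(1, [(10.0, 12.0, 5.0), (20.0, 12.0, 5.0)])
cso.user_data["organ"] = "left lung"  # a name-value pair
cso_list.add(cso)
cso_list.group("Left Lung", [1])

print(sorted(cso_list.groups["Left Lung"]))  # -> [1]
```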
-![CSO Groups](images/tutorials/dataobjects/contours/DO2_11_2.png "CSO Groups") +![CSOGroups are used to color each lung differently](images/tutorials/dataobjects/contours/DO2_11_2.png "CSOGroups are used to color each lung differently") {{}} For more information, see {{< docuLinks "/Standard/Documentation/Publish/Overviews/CSOOverview.html" "CSO Overview" >}} diff --git a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample1.md b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample1.md index 8b235e177..3536e6c04 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample1.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample1.md @@ -18,19 +18,19 @@ menu: {{< youtube "ygYJMmQ95v8">}} ## Introduction -We like to start with the creation of CSOs. To create CSOs, you need a `SoCSO*`-Editor. There are several different editors that can be used to create CSOs (see [here](tutorials/dataobjects/contourobjects#CSOEditors)). Some of them are introduced in this example. +We like to start with the creation of CSOs. To create CSOs, you need a `SoCSO*Editor`. There are several different editors that can be used to create CSOs (see [here](tutorials/dataobjects/contourobjects#CSOEditors)). Some of them are introduced in this example. ## Steps to Do ### Develop Your Network For this example, we need the following modules. Add the modules to your workspace, connect them as shown below, and load the example image *$(DemoDataPath)/BrainMultiModal/ProbandT1.tif*. -![Data Objects Contours Example 1](images/tutorials/dataobjects/contours/DO1_01.png "Data Objects Contours Example 1") +![Simple network to create rectangle CSOs on a 2D image](images/tutorials/dataobjects/contours/DO1_01.png "Simple network to create rectangle CSOs on a 2D image") ### Edit Rectangular CSO Now, open the module `View2D`. Use your left mouse button {{< mousebutton "left" >}} to draw a rectangle as your first CSO.
-![Rectangle Contour](images/tutorials/dataobjects/contours/DO1_02.png "Rectangle Contour") +![Rectangle contour](images/tutorials/dataobjects/contours/DO1_02.png "Rectangle contour") The involved modules have the following tasks: @@ -42,25 +42,25 @@ If you now open the panel of the `CSOManager`, you will find one CSO, the one we created before. If you like, you can name the CSO. -![CSO Manager](images/tutorials/dataobjects/contours/DO1_04.png "CSO Manager") +![Panel of CSOManager](images/tutorials/dataobjects/contours/DO1_04.png "Panel of CSOManager") ### Change Properties of CSO Now, add the module `SoCSOVisualizationSettings` to your workspace and connect it as shown below. -![CSO Manager](images/tutorials/dataobjects/contours/DO1_05.png "CSO Manager") +![Added a SoCSOVisualizationSettings](images/tutorials/dataobjects/contours/DO1_05.png "Added a SoCSOVisualizationSettings") Open the module to change the visualization settings of your CSOs. In this case, we change the line style (to dashed lines) and the color (to -be red). Tick the *Auto apply* box at the bottom or press *Apply*. +be red). Tick the Auto apply checkbox at the bottom or press Apply. -![Visualization Settings](images/tutorials/dataobjects/contours/DO1_07.png "Visualization Settings") +![Panel of SoCSOVisualizationSettings](images/tutorials/dataobjects/contours/DO1_07.png "Panel of SoCSOVisualizationSettings") ### CSOs of Different Shapes Exchange the module `SoCSORectangleEditor` with another editor, for example, the `SoCSOPolygonEditor` or `SoCSOSplineEditor`. Other editors allow you to draw CSOs of other shapes. For polygon-shaped CSOs or CSOs -consisting of splines, left-click on the image viewer to add new points -to form the CSO. Double-click to finish the CSO. +consisting of splines, left-click {{< mousebutton "left" >}} on the image viewer to add new points +to form the CSO. Double-click {{< mousebutton "left" >}} to finish the CSO.
![SoCSOPolygonEditor](images/tutorials/dataobjects/contours/DO1_08.png "SoCSOPolygonEditor") ![SoCSOSplineEditor](images/tutorials/dataobjects/contours/DO1_09.png "SoCSOSplineEditor") @@ -73,7 +73,7 @@ If you want to fill the shapes, you can simply add a `SoCSOFillingRenderer` modu Create CSOs with green color and ellipsoid shapes. ## Summary -* CSOs can be created using a SoCSO\*-Editor. +* CSOs can be created using a `SoCSO*Editor`. * CSOs of different shapes can be created. * A list of CSOs can be stored in the `CSOManager`. * Properties of CSOs can be changed using `SoCSOVisualizationSettings`. diff --git a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample2.md b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample2.md index 87451b88a..7bf584e82 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample2.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample2.md @@ -18,8 +18,9 @@ menu: {{< youtube "l2ih_maKfSw">}} ## Introduction -In this example, we like to create CSOs using the **Live Wire -Algorithm**, which allows semiautomatic CSO creation. The algorithm +In this example, we like to create CSOs using the [**Live Wire +Algorithm**](https://en.wikipedia.org/wiki/Livewire_Segmentation_Technique), +which allows semiautomatic CSO creation. The algorithm uses edge detection to support the user creating CSOs. We also like to interpolate CSOs over slices. That means additional CSOs are @@ -30,28 +31,26 @@ As a last step, we will group together CSOs of the same anatomical unit. ## Steps to Do ### Develop Your Network and Create CSOs - In order to do that, create the shown network. You can use the network -from the previous example and exchange the `SoCSO`-Editor. In addition to +from the previous example and exchange the `SoCSO*Editor`. In addition to that, load the example image *$(DemoDataPath)/Thorax1_CT.small.tif*.
Now, create some CSOs on different, non-consecutive slices. Afterward, hover over the `CSOManager` and press the emerging *plus* symbol. This displays the number of existing CSOs. -![Data Objects Contours Example 2](images/tutorials/dataobjects/contours/DO2_02.png "Data Objects Contours Example 2") +![Left lung has been segmented on four slices](images/tutorials/dataobjects/contours/DO2_02.png "Left lung has been segmented on four slices") ### Create CSO Interpolations We like to generate interpolated contours for existing CSOs. In order to do that, add the module `CSOSliceInterpolator` to your workspace and connect it as shown. -![Slice Interpolation](images/tutorials/dataobjects/contours/DO2_03.png "Slice Interpolation") +![Added a slice interpolator](images/tutorials/dataobjects/contours/DO2_03.png "Added a slice interpolator") -Open the panel of module `CSOSliceInterpolator` and change the *Group -Handling* and the *Mode* as shown. If you now press *Update*, interpolating -CSOs are created. +Open the panel of the module `CSOSliceInterpolator` and change the Group +Handling and the Mode as shown. If you now press Update, interpolated CSOs are created. -![Slice Interpolation Settings](images/tutorials/dataobjects/contours/DO2_04_2.png "Slice Interpolation Settings") +![Slice interpolator settings](images/tutorials/dataobjects/contours/DO2_04_2.png "Slice interpolator settings") You can see the interpolated CSOs are added to the `CSOManager`. If you now scroll through your slices, you can find the interpolated CSOs. @@ -63,23 +62,23 @@ displayed in white and interpolated CSOs are marked in yellow. ![Interpolated CSOs](images/tutorials/dataobjects/contours/DO2_06.png "Interpolated CSOs") ### Group CSOs -We like to segment both lobes of the lung. To distinguish the CSOs of both lungs, we like to group CSOs together, according to the lung they belong to. First, we like to group together all CSOs belonging to the lung we already segmented.
In order to do this, open the `CSOManager`. Create a new Group and label that Group. We chose the label *Left Lung*. Now, mark the created Group and all CSOs you want to include into that group and press *Combine*. If you click on the Group, all CSOs belonging to this Group are marked with a star. +We like to segment both lobes of the lung. To distinguish the CSOs of both lungs, we like to group CSOs together, according to the lung they belong to. First, we like to group together all CSOs belonging to the lung we already segmented. In order to do this, open the `CSOManager`. Create a new CSOGroup and label that CSOGroup. We chose the label *Left Lung*. Now, mark the created CSOGroup and all CSOs you want to include into that group and press Combine. If you click {{< mousebutton "left" >}} on the CSOGroup, all CSOs belonging to this CSOGroup are marked with an asterisk. {{}} -Keep in mind, that the right lung might be displayed on the left side of the image and vice versa, depending on your view. +Keep in mind that the right lung might be displayed on the left side of the image, and vice versa, depending on your view. {{}} -![Creating CSO Groups](images/tutorials/dataobjects/contours/DO2_07.png "Creating CSO Groups") -![Creating CSO Groups](images/tutorials/dataobjects/contours/DO2_07_2.png "Creating CSO Groups") +![Creating CSOGroups: labeling](images/tutorials/dataobjects/contours/DO2_07.png "Creating CSOGroups: labeling") +![Creating CSOGroups: combining](images/tutorials/dataobjects/contours/DO2_07_2.png "Creating CSOGroups: combining") As a next step, segment the right lung by creating new CSOs. -![Creation of further CSOs](images/tutorials/dataobjects/contours/DO2_08.png "Creation of further CSOs") +![Creation of further CSOs for the right lung](images/tutorials/dataobjects/contours/DO2_08.png "Creation of further CSOs for the right lung") -Create a new Group for all CSOs of the right lung. We labeled this Group *Right Lung*. 
Again, mark the group and the CSOs you like to combine and press *Combine*. -![Grouping remaining CSOs](images/tutorials/dataobjects/contours/DO2_09.png "Grouping remaining CSOs") +Create a new CSOGroup for all CSOs of the right lung. We labeled this CSOGroup *Right Lung*. Again, mark the group and the CSOs you like to combine and press Combine. +![Grouping CSOs for the right lung](images/tutorials/dataobjects/contours/DO2_09.png "Grouping CSOs for the right lung") -To visually distinguish the CSOs of both groups, change the color of each group under {{< menuitem "Group" "Visuals" >}}. We changed the color of the *Left Lung* to be green and of the *Right Lung* to be orange for path and seed points. In addition, we increased the *Width* of the path points. -![Interpolated CSOs](images/tutorials/dataobjects/contours/DO2_10.png "Interpolated CSOs") +To visually distinguish the CSOs of both groups, change the color of each group under {{< menuitem "Group" "Visuals" >}}. We changed the color of the *Left Lung* to be green and of the *Right Lung* to be orange for path and seed points. In addition, we increased the Width of the path points. +![Setting visual parameters for CSOGroups](images/tutorials/dataobjects/contours/DO2_10.png "Setting visual parameters for CSOGroups") As a last step, we need to disconnect the module `SoCSOVisualizationSettings`, as this module overwrites the visualization settings we enabled for each group in the `CSOManager`. 
![Interpolated CSOs](images/tutorials/dataobjects/contours/DO2_11.png "Interpolated CSOs") diff --git a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample3.md b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample3.md index 57342d785..03b40e339 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample3.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample3.md @@ -21,33 +21,33 @@ menu: In this example, we'd like to use the created CSOs to display an overlay. This allows us to mark one of two lungs. In addition to that, we will display the whole segmented lobe of the lung in a 3D -image. +viewer. ## Steps to Do ### Develop Your Network Use the network from the [contour example 2](tutorials/dataobjects/contours/contourexample2) and add the modules `VoxelizeCSO`, `SoView2DOverlay` and `View2D` to your workspace. Connect the module as -shown. The module `VoxelizeCSO` allows to convert CSOs into voxel images. +shown. The module `VoxelizeCSO` allows you to convert CSOs into a voxel image. -![Data Objects Contours Example 3](images/tutorials/dataobjects/contours/DO3_02.png "Data Objects Contours Example 3") +![Network for segmenting and viewing contours in 2D](images/tutorials/dataobjects/contours/DO3_02.png "Network for segmenting and viewing contours in 2D") -### Convert CSOs into Voxel Images +### Convert CSOs into a Voxel Image Update the module `VoxelizeCSO` to create voxel masks based on your CSOs. The result can be seen in `View2D1`. -![Overlay](images/tutorials/dataobjects/contours/DO3_03.png "Overlay") +![Showing an overlay of the voxel mask in 2D](images/tutorials/dataobjects/contours/DO3_03.png "Showing an overlay of the voxel mask in 2D") Next, we like to inspect the marked lobe of the lung. This means we like to inspect the object that is built out of CSOs. In order to do that, add the `View3D` module. The 3D version of the lung can be seen in the viewer.
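Conceptually, voxelizing a closed contour means testing each voxel center of a slice against the contour polygon. A minimal even-odd ray-casting sketch (an illustration of the idea only, not the `VoxelizeCSO` implementation):

```python
def point_in_polygon(x, y, polygon):
    """Even-odd ray casting: is (x, y) inside the closed polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Edge straddles the horizontal ray through y?
        if (y1 > y) != (y2 > y):
            # Intersection of the edge with that ray lies right of x?
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def voxelize(polygon, width, height):
    """Rasterize one contour on one slice into a binary mask."""
    return [[1 if point_in_polygon(col + 0.5, row + 0.5, polygon) else 0
             for col in range(width)]
            for row in range(height)]

# A square contour from (1,1) to (4,4) on a 6x6 slice.
square = [(1.0, 1.0), (4.0, 1.0), (4.0, 4.0), (1.0, 4.0)]
mask = voxelize(square, 6, 6)
print(sum(map(sum, mask)))  # -> 9 (the 3x3 block of interior voxel centers)
```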
-![Additional 3D Viewer](images/tutorials/dataobjects/contours/DO3_04.png "Additional 3D Viewer") -![Extracted Object](images/tutorials/dataobjects/contours/DO3_05.png "Extracted Object") +![Additional 3D viewer](images/tutorials/dataobjects/contours/DO3_04.png "Additional 3D viewer") +![Extracted object](images/tutorials/dataobjects/contours/DO3_05.png "Extracted object") ## Summary -* The module `VoxelizeCSO` converts CSOs to voxel images. -* Create an overlay out of voxel images using `SoView2DOverlay`. +* The module `VoxelizeCSO` converts CSOs to a voxel image. +* Create an overlay out of a voxel image using `SoView2DOverlay`. {{< networkfile "examples/data_objects/contours/example3/ContourExample3.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample4.md b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample4.md index b62f0e4c7..486560a18 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample4.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample4.md @@ -18,7 +18,7 @@ menu: {{< youtube "bT2ZprYcuOU">}} ## Introduction -In this example we like to calculate the volume of our object, in this +In this example, we like to calculate the volume of our object, in this case, the part of the lung we have segmented. ## Steps to Do @@ -28,39 +28,39 @@ Add the modules `CalculateVolume` and `SoView2DAnnotation` to your workspace and connect both modules as shown. Update the module `CalculateVolume`, which directly shows the volume of our object. -![Data Objects Contours Example 4](images/tutorials/dataobjects/contours/DO4_01.png "Data Objects Contours Example 4") +![Network for segmenting and viewing contours in 2D and in 3D](images/tutorials/dataobjects/contours/DO4_01.png "Network for segmenting and viewing contours in 2D and in 3D") ### Display the Lung Volume in the Image We now like to display the volume in the image viewer. 
For this, open the panel of the modules `CalculateVolume` and `SoView2DAnnotation`. Open the tab *Input* in the panel of the module `SoView2DAnnotation`. Now, -establish a parameter connection between *Total Volume* calculated in -the module `CalculateVolume` and the *input00* of the module -`SoView2DAnnotation`. This connection projects the *Total Volume* to the +establish a parameter connection between Total Volume calculated in +the module `CalculateVolume` and the input00 of the module +`SoView2DAnnotation`. This connection projects the Total Volume to the input of `SoView2DAnnotation`. -![Display Volume](images/tutorials/dataobjects/contours/DO4_02.png "Display Volume") +![Display volume](images/tutorials/dataobjects/contours/DO4_02.png "Display volume") -Go back to the tab *General* to select the *Annotation Mode User*. A separate tab exists for +Go back to the tab *General* to select the Annotation Mode *User*. A separate tab exists for each annotation mode. -![Annotate Image](images/tutorials/dataobjects/contours/DO4_03_2.png "Annotate Image") +![Annotate image: settings](images/tutorials/dataobjects/contours/DO4_03_2.png "Annotate image: settings") We select the tab *User* that we like to work on. You can see four fields that display four areas of a viewer in which you can add information text to the image. -![Annotate Image 2](images/tutorials/dataobjects/contours/DO4_04.png "Annotate Image") +![Annotate image: user annotations](images/tutorials/dataobjects/contours/DO4_04.png "Annotate image: user annotations") In this example we only like to add the volume, so delete all present input and replace that by the shown text. Now, you can see that the volume is displayed in the image viewer. If this is not the case, switch the annotations of the viewer by pressing the keyboard shortcut {{< keyboard "A" >}}. 
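The computation behind the displayed value is simple: the volume is the number of segmented voxels times the volume of a single voxel. A sketch of this, where the voxel spacing is an assumed example value and not taken from the tutorial data:

```python
# Volume of a binary voxel mask: count segmented voxels, multiply by the
# volume of one voxel. Spacing values below are illustrative assumptions.

voxel_size_mm = (0.7, 0.7, 1.5)          # x/y/z spacing in mm (assumed)
mask = [
    # two 2x2 slices -> 8 voxels, 5 of them segmented
    [[1, 1], [1, 0]],
    [[1, 0], [0, 1]],
]

segmented = sum(v for sl in mask for row in sl for v in row)
voxel_volume_mm3 = voxel_size_mm[0] * voxel_size_mm[1] * voxel_size_mm[2]
total_mm3 = segmented * voxel_volume_mm3
total_ml = total_mm3 / 1000.0            # 1 ml = 1000 mm^3

print(f"Total Volume: {total_ml:.4f} ml")  # annotation-style string
```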
-![Display Volume in Image](images/tutorials/dataobjects/contours/DO4_05.png "Display Volume in Image") +![Display volume in image](images/tutorials/dataobjects/contours/DO4_05.png "Display volume in image") ## Summary -* `CalculateVolume` can calculate the volume of a voxel image. +* `CalculateVolume` calculates the volume of a voxel image. * `SoView2DAnnotation` enables you to manually change the annotation mode of a viewer. * Annotations shown in a `View2D` can be customized by using a `SoView2DAnnotation` module. {{< networkfile "examples/data_objects/contours/example4/ContourExample4.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample5.md b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample5.md index 73ed7b82f..4149ded06 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample5.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample5.md @@ -27,18 +27,17 @@ Add the following modules to your workspace and connect them as shown. Load the example image *Bone.tiff*. ### Automatic Creation of CSOs Based on the Isovalue -Now, open the panel of `CSOIsoGenerator` to set the *Iso Value* to 1200. If you press *Update* in +Now, open the panel of `CSOIsoGenerator` to set the Iso Value to *1200*. If you press Update in the panel, you can see the creation of CSOs on each image slice when opening the module `View2D`. In addition to that, the number of CSOs is displayed in the `CSOManager`. The module -`CSOIsoGenerator` generates isocontours for each slice at a fixed isovalue. This means that closed CSOs are formed based on the detection of the -voxel value of 1200 on every slice. +`CSOIsoGenerator` generates isocontours for each slice at a fixed isovalue. This means that closed CSOs are formed based on the detection of the voxel value of *1200* on every slice.
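The idea of "detecting" an isovalue can be sketched in a marching-squares-like way: mark pairs of adjacent voxels whose values straddle the isovalue; the isocontour runs between them. This is an illustration of the concept, not the algorithm `CSOIsoGenerator` actually uses:

```python
def iso_crossings(slice2d, iso):
    """Return coordinate pairs of horizontally/vertically adjacent voxels
    whose values straddle the isovalue -- the isocontour passes between them."""
    crossings = []
    rows, cols = len(slice2d), len(slice2d[0])
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):   # right and down neighbors
                r2, c2 = r + dr, c + dc
                if r2 < rows and c2 < cols:
                    a, b = slice2d[r][c], slice2d[r2][c2]
                    if (a < iso) != (b < iso):
                        crossings.append(((r, c), (r2, c2)))
    return crossings

# Toy slice: one bright "bone" voxel (1500) surrounded by soft tissue (~100).
slice2d = [
    [100, 100, 100],
    [100, 1500, 100],
    [100, 100, 100],
]
print(len(iso_crossings(slice2d, 1200)))  # -> 4 edges surround the bright voxel
```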
-![Data Objects Contours Example 5](images/tutorials/dataobjects/contours/DO5_02.png "Data Objects Contours Example 5") +![Automatically generated isocontours](images/tutorials/dataobjects/contours/DO5_02.png "Automatically generated isocontours") ### Ghosting Now, we like to make CSOs of previous and subsequent slices visible (ghosting). In order to do that, open the panel of `SoCSOVisualizationSettings` and -open the tab *Misc*. Increase the parameter `Ghosting depth in voxel`, +open the tab *Misc*. Increase the parameter Ghosting Depth In Voxel, which shows you the number of slices above and below the current slice in which CSOs are also seen in the viewer. The result can be seen in the viewer. @@ -51,7 +50,7 @@ add the modules `SoCSO3DRenderer` and `SoExaminerViewer` to your network and connect them as shown. In the viewer `SoExaminerViewer`, you can see all CSOs together. In this case all scanned bones can be seen. -![CSOs in 3D View](images/tutorials/dataobjects/contours/DO5_05.png "CSOs in 3D View") +![CSOs in a 3D viewer](images/tutorials/dataobjects/contours/DO5_05.png "CSOs in a 3D viewer") ## Summary * `CSOIsoGenerator` enables automatic CSO generation based on an isovalue. @@ -59,4 +58,5 @@ all CSOs together. In this case all scanned bones can be seen. {{< networkfile "examples/data_objects/contours/example5/ContourExample5.mlab" >}} - [//]: <> (MVL-682) \ No newline at end of file + [//]: <> (MVL-682) + \ No newline at end of file diff --git a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample6.md b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample6.md index 8bdb4b7bd..3cded32ca 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample6.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample6.md @@ -18,18 +18,18 @@ menu: {{< youtube "-ACAoeK2Fm8">}} ## Introduction -In this example, we are adding a label to a contour. 
The label provides information about measurements and about the contour itself. The label remains connected to the contour and can be moved via mouse interactions. +In this example, we are adding a label to a contour. The label provides information about measurements and about the contour itself. The label remains connected to the contour and can be moved via mouse interactions {{< mousebutton "left" >}}. ## Steps to Do ### Develop Your Network -Add the modules `LocalImage` and `View2D` to your workspace and connect them as shown below. Load the file *ProbandT1.dcm* from MeVisLab demo data. In order to create contours (CSOs), we need a `SoView2DCSOExtensibleEditor` module. It manages attached CSO editors, renderers and offers an optional default renderer for all types of CSOs. +Add the modules `LocalImage` and `View2D` to your workspace and connect them as shown below. Load the file *ProbandT1.dcm* from MeVisLab demo data. In order to create contours (CSOs), we need a `SoView2DCSOExtensibleEditor` module. It manages attached CSO editors, renderers, and offers an optional default renderer for all types of CSOs. The first CSO we want to create is a distance line. Add a `SoCSODistanceLineEditor` to the `SoView2DCSOExtensibleEditor`. It renders and interactively generates CSOs that consist of a single line segment. The line segment can be rendered as an arrow; it can be used to measure distances. We are going to add some more editors later. In order to have the same look and feel for all types of CSOs, add a `SoCSOVisualizationSettings` module as seen below. The module is used to adjust visual parameters like color and line style for CSOs. Also add a `CSOManager` module to organize CSOs and CSOGroups within a network. -![Initial Network](images/tutorials/dataobjects/contours/Ex6_1.png "Initial Network") +![Initial network](images/tutorials/dataobjects/contours/Ex6_1.png "Initial network") We are now able to create lines in the `View2D`. 
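A distance-line measurement boils down to the Euclidean distance between the line's two seed points in world (mm) coordinates. As a sketch, assuming the points are already given in mm:

```python
import math

def distance_mm(p1, p2):
    """Euclidean distance between two 3D world-coordinate points (in mm)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

seed_start = (10.0, 20.0, 5.0)
seed_end = (13.0, 24.0, 5.0)
print(f"Length: {distance_mm(seed_start, seed_end):.2f} mm")  # -> Length: 5.00 mm
```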
You can also modify the lines by dragging the seed points to a different location. @@ -41,7 +41,7 @@ Add a `CSOLabelRenderer` module to your network and connect it to a `SoGroup`. A ![CSOLabelRenderer](images/tutorials/dataobjects/contours/Ex6_14.png "CSOLabelRenderer") -We now want to customize the details to be shown for each distance line. Open the panel of the `CSOLabelRenderer`. You can see the two parameters *labelString* and *labelName*. The *labelString* is set to the *ID* of the CSO. The *labelName* is set to a static text and the *label* property of the CSO. The label can be defined in the module `CSOManager`. You can do this, but we are not defining a name for each contour in this example. +We now want to customize the details to be shown for each distance line. Open the panel of the `CSOLabelRenderer`. You can see the two parameters labelString and labelName. The labelString is set to the *ID* of the CSO. The labelName is set to a static text and the *label* property of the CSO. The label can be defined in the module `CSOManager`. You can do this, but we are not defining a name for each contour in this example. Enter the following to the panel of the `CSOLabelRenderer` module: {{< highlight filename="CSOLabelRenderer" >}} @@ -53,7 +53,7 @@ deviceOffsetY = 0 ``` {{}} -We are setting the *labelName* to a static text showing the type of the CSO and the unique *ID* of the contour. We also define the *labelString* to the static description of the measurement and the *length* parameter of the CSO. +We are setting the labelName to a static text showing the type of the CSO and the unique *ID* of the contour. We also define the labelString to the static description of the measurement and the *length* parameter of the CSO. 
![labelString and labelName](images/tutorials/dataobjects/contours/Example6_5.png "labelString and labelName") @@ -66,21 +66,21 @@ labelString = f"Length: {cso.getLength():.2f} mm" In order to see all possible parameters of a CSO, add a `CSOInfo` module to your network and connect it to the `CSOManager`. The geometric information of the selected CSO from `CSOManager` can be seen there. -![CSOInfo](images/tutorials/dataobjects/contours/Ex6_CSOInfo.png "CSOInfo") +![CSOInfo showing geometric information](images/tutorials/dataobjects/contours/Ex6_CSOInfo.png "CSOInfo showing geometric information") -For labels shown on grayscale images, it makes sense to add a shadow. Open the panel of the `SoCSOVisualizationSettings` module and on tab *Misc* check the option *Should render shadow*. This increases the readability of your labels. +For labels shown on grayscale images, it makes sense to add a shadow. Open the panel of the `SoCSOVisualizationSettings` module and on tab *Misc* check the option Should render shadow. This increases the readability of your labels. {{< imagegallery 2 "images/tutorials/dataobjects/contours/" "Ex6_NoShadow" "Ex6_Shadow" >}} -If you want to define your static text as a parameter in multiple labels, you can open the panel of the `CSOLabelRenderer` module and define text as *User Data*. The values can then be used in Python via *userData*. +If you want to define your static text as a parameter in multiple labels, you can open the panel of the `CSOLabelRenderer` module and define text as *User Data*. The values can then be used in Python via userData. -![User Data](images/tutorials/dataobjects/contours/Ex6_Parameters.png "User Data") +![Using userData to generate labels](images/tutorials/dataobjects/contours/Ex6_Parameters.png "Using userData to generate labels") You can also add multiple CSO editors to see the different options. Add the `SoCSORectangleEditor` module to your workspace and connect it to the `SoGroup` module. 
As we now have two different editors, we need to tell the `CSOLabelRenderer` which CSO is to be rendered. Open the panel of the `SoCSODistanceLineEditor`. You can see the field Extension Id set to *distanceLine*. Open the panel of the `SoCSORectangleEditor`. You can see the field Extension Id set to *rectangle*. ![Extension ID](images/tutorials/dataobjects/contours/Ex6_ExtensionID.png "Extension ID") -We currently defined the *labelName* and *labelString* for the distance line. If we want to define different labels for different types of CSOs, we have to change the `CSOLabelRenderer` Python script. Open the panel of the `CSOLabelRenderer` and change the Python code to the following: +We currently defined the labelName and labelString for the distance line. If we want to define different labels for different types of CSOs, we have to change the Python script of the `CSOLabelRenderer`. Open the panel of the `CSOLabelRenderer` and change the Python code to the following: {{< highlight filename="CSOLabelRenderer" >}} ```Python @@ -98,19 +98,19 @@ deviceOffsetY = 0 ``` {{}} -![SoCSORectangleEditor](images/tutorials/dataobjects/contours/Ex6_LineAndRectangle.png "SoCSORectangleEditor") +![Create a label based on the type of CSO](images/tutorials/dataobjects/contours/Ex6_LineAndRectangle.png "Create a label based on the type of CSO") If you now draw new CSOs, you will notice that you still always create distance lines. Open the panel of the `SoView2DCSOExtensibleEditor`. You can see that the Creator Extension Id is set to *__default*. By default, the first found eligible editor is used to create a new CSO. In our case this is the `SoCSODistanceLineEditor`. 
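The per-type dispatch that the `CSOLabelRenderer` script performs can be pictured in plain Python. The `subtype` strings mirror the editors' Extension Ids; the function name and the separate `length`/`area` parameters are hypothetical stand-ins for the real CSO properties:

```python
def make_label(subtype, length_mm, area_mm2=None):
    """Build label text depending on the kind of CSO that was drawn."""
    if subtype == "distanceLine":
        return f"Length: {length_mm:.2f} mm"
    if subtype == "rectangle":
        return f"Perimeter: {length_mm:.2f} mm, Area: {area_mm2:.2f} mm2"
    return "Unlabeled CSO"

print(make_label("distanceLine", 41.237))    # -> Length: 41.24 mm
print(make_label("rectangle", 60.0, 200.0))  # -> Perimeter: 60.00 mm, Area: 200.00 mm2
```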
-![SoCSORectangleEditor](images/tutorials/dataobjects/contours/Ex6_DefaultExtension.png "SoCSORectangleEditor") +![Creator Extension Id in SoView2DCSOExtensibleEditor](images/tutorials/dataobjects/contours/Ex6_DefaultExtension.png "Creator Extension Id in SoView2DCSOExtensibleEditor") Change Creator Extension Id to *rectangle*. -![SoCSORectangleEditor & SoView2DCSOExtensibleEditor ](images/tutorials/dataobjects/contours/Ex6_8.png "SoCSORectangleEditor & SoView2DCSOExtensibleEditor") +![Use the Creator Extension Id of SoView2DCSOExtensibleEditor to enable a specific editor](images/tutorials/dataobjects/contours/Ex6_8.png "Use the Creator Extension Id of SoView2DCSOExtensibleEditor to enable a specific editor") Newly created CSOs are now rectangles. The label values are shown as defined in the `CSOLabelRenderer` and show the length and the area of the rectangle. -![Labeled Rectangle in View2D](images/tutorials/dataobjects/contours/Ex6_9.png "Labeled Rectangle in View2D") +![Labeled rectangle in View2D](images/tutorials/dataobjects/contours/Ex6_9.png "Labeled rectangle in View2D") {{}} The *Length* in the context of rectangles represents the perimeter of the rectangle, calculated as *2a + 2b*, where *a* and *b* are the lengths of the two sides of the rectangle. @@ -125,9 +125,6 @@ You will find a lot more information in the `CSOInfo` module for your rectangles CSO Editor - PCA X Ext. - PCA Y Ext. - PCA Z Ext. Length Area @@ -137,96 +134,63 @@ You will find a lot more information in the `CSOInfo` module for your rectangles SoCSOPointEditor n.a. n.a. - n.a. - n.a. - n.a. SoCSOAngleEditor - - - - - + Length of all lines (in mm) + n.a. SoCSOArrowEditor - - - - - + n.a. + n.a. SoCSODistanceLineEditor - - - Length (in mm) - + n.a. SoCSODistancePolylineEditor - - - Length of all lines (in mm) - + n.a. 
SoCSOEllipseEditor - - - Perimeter (in mm) Area (in mm2) SoCSORectangleEditor - - - Length of all sides (in mm) Area (in mm2) SoCSOSplineEditor - - - - - + Length of all lines (in mm) + If closed: Area (in mm2) SoCSOPolygonEditor - - - Length of all lines (in mm) - + If closed: Area (in mm2) SoCSOIsoEditor - - - - - + Length of all lines (in mm) + Area (in mm2) SoCSOLiveWireEditor - - - - - + Length of all lines (in mm) + If closed: Area (in mm2) ## Summary * Custom labels can be added to contours using the `CSOLabelRenderer` module. -* Python scripting is used within the `CSOLabelRenderer` module to customize label content based on CSO types. +* Python scripting is used within the `CSOLabelRenderer` module to customize label content. * Visual properties can be adjusted within the `CSOLabelRenderer` and the `SoCSOVisualizationSettings` modules to improve label visibility and appearance. {{< networkfile "examples/data_objects/contours/example6/ContourExample6.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample7.md b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample7.md index d86083a85..9e8e46ba0 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample7.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample7.md @@ -24,7 +24,7 @@ In this example, we are using the module `CSOListContainer` instead of the `CSOM ![CSOListContainer](images/tutorials/dataobjects/contours/Example_7_2.png "CSOListContainer") -We will create multiple CSOs by using the `SoCSOEllipseEditor` and dynamically add these CSOs to different groups via Python scripting depending on their size. CSOs larger than a configurable threshold will be drawn in red, small CSOs will be drawn in green. The colors will also be adapted if we manually resize the contours. 
+We will create multiple CSOs by using the `SoCSOEllipseEditor` and dynamically add these CSOs to different groups via Python scripting depending on their area. CSOs larger than a configurable threshold will be drawn in red, smaller CSOs will be drawn in green. The colors will also be adapted if we manually resize the contours. ## Steps to Do @@ -33,20 +33,20 @@ Add a `LocalImage` and a `View2D` module to your workspace and connect them as s Add a `SoCSOEllipseEditor` and a `CSOListContainer` to the `SoView2DCSOExtensibleEditor` -![Initial Network](images/tutorials/dataobjects/contours/Example_7_3.png "Initial Network") +![Initial network](images/tutorials/dataobjects/contours/Example_7_3.png "Initial network") You are now able to draw CSOs. Create a separate directory for this tutorial and save your network in this empty directory. This makes the final structure easier to read. ### Create a Local Macro Module -Select the module `CSOListContainer` and open menu {{}}. Enter some details about your new local macro module and click *Finish*. Leave the already defined output as is. +Select the module `CSOListContainer` and open menu {{}}. Enter some details about your new local macro module and click Finish. Leave the already defined output as is. -![Create Local Macro](images/tutorials/dataobjects/contours/Example_7_4.png "Create Local Macro") +![Create a local macro](images/tutorials/dataobjects/contours/Example_7_4.png "Create a local macro") -The appearance of the `CSOListContainer` module changes, because it is a macro module named `csoList` now. +The appearance of the `CSOListContainer` module changes, because it is a macro module named *csoList* now. -![Network with new local macro](images/tutorials/dataobjects/contours/Example_7_5.png "Network with new local macro") +![Network with the new local macro](images/tutorials/dataobjects/contours/Example_7_5.png "Network with the new local macro") The behavior of your network does not change. 
You can still draw the same CSOs and they are still managed by the `CSOListContainer` module. The reason why we created a local macro with a single module inside is that we want to add Python scripting to the module. Python scripts can only be added to macro modules. @@ -54,7 +54,7 @@ Open the context menu of your `csoList` module {{< mousebutton "right" >}} and s The MeVisLab text editor MATE opens, showing your *.script* file. You can see the output of your module as *CSOListContainer.outCSOList*. We want to define a threshold for the color of our CSOs. For this, add another field to the *Parameters* section of your script file named areaThreshold. Define the type as *Float* and value as *2000.0*. -In order to call Python functions, we also need a Python file. Add a *Commands* section and define the *source* of the Python file as *$(LOCAL)/csoList.py*. Also add an *initCommand* as *initCSOList*. The initCommand defines the Python function that is called whenever the module is added to the workspace or reloaded. +In order to call Python functions, we also need a Python file. Add a *Commands* section and define the *source* of the Python file as *$(LOCAL)/csoList.py*. Also add an initCommand as *initCSOList*. The initCommand defines the Python function that is called whenever the module is added to the workspace or reloaded. {{< highlight filename="csoList.script" >}} ```Stan @@ -77,14 +77,14 @@ Commands { ``` {{}} -Right-click {{< mousebutton "right" >}} on the *initCSOList* command and select {{< menuitem "Create Python Function initCSOList" >}}. The Python file and the function are generated automatically. +Right-click {{< mousebutton "right" >}} on the initCSOList command and select {{< menuitem "Create Python Function 'initCSOList'" >}}. The Python file and the function are generated automatically. -Back in MeVisLab, the new field areaThreshold can be seen in Module Inspector when selecting your module. 
The next step is to write the Python function *initCSOList*. +Back in MeVisLab, the new field areaThreshold can be seen in Module Inspector when selecting your module. The next step is to write the Python function initCSOList. ### Write Python Script Whenever the local macro module is added to the workspace or reloaded, new CSOLists shall be created and we need a possibility to update the lists whenever a new CSO has been created or existing contours changed. -Define a function *setupCSOList*. +Define a function setupCSOList. {{< highlight filename="csoList.py" >}} ```Python @@ -107,11 +107,11 @@ def _getCSOList(): The function gets the current CSOList from the output field of the `CSOListContainer`. Initially, it should be empty. If not, we want to start with an empty list; therefore, we remove all existing CSOs. -We also create two new CSO lists: one list for small contours, one list for larger contours, depending on the defined areaThreshold from the module's fields. +We also create two new CSOGroups: one list for small contours, one list for larger contours, depending on the defined areaThreshold of the module. Additionally, we also want to define different colors for the CSOs in the lists. Small contours shall be drawn in green, large contours shall be drawn in red. -In order to listen for changes on the contours, we need to register for notifications. Create a new function *registerForNotification*. +In order to listen for changes on the contours, we need to register for notifications. Create a new function registerForNotification. {{< highlight filename="csoList.py" >}} ```Python @@ -134,13 +134,13 @@ def _getAreaThreshold(): ``` {{}} -The function gets all currently existing CSOs from the `CSOListContainer`. Then, we register for notifications on this list. Whenever the notification *NOTIFICATION_CSO_FINISHED* is sent in the current context, we call the function *csoFinished*. +The function gets all currently existing CSOs from the `CSOListContainer`. 
Then, we register for notifications on this list. Whenever the notification *NOTIFICATION_CSO_FINISHED* is sent in the current context, we call the function csoFinished. -The *csoFinished* function again needs all existing contours. We walk through each CSO in the list and remove it from all groups. As we do not know which CSO has been changed from the notification, we evaluate the area of each CSO and add them to the correct list again. +The csoFinished function again needs all existing contours. We walk through each CSO in the list and remove it from all groups. As we do not know which CSO has been changed from the notification, we evaluate the area of each CSO and add them to the correct list again. -The function *getAreaThreshold* returns the current value of our parameter field areaThreshold. +The function _getAreaThreshold returns the current value of our parameter field areaThreshold. -Now, we can call our functions in the *initCSOList* function and test our module. +Now, we can call our functions in the initCSOList function and test our module. {{< highlight filename="csoList.py" >}} ```Python @@ -182,12 +182,12 @@ def _getCSOList(): ``` {{}} -![Final Network](images/tutorials/dataobjects/contours/Example_7_6.png "Final Network") +![Final network](images/tutorials/dataobjects/contours/Example_7_6.png "Final network") -If you now draw contours, they are automatically colored depending on the size. You can also edit existing contours and the color is adapted. +If you now draw contours, they are automatically colored depending on their area. You can also edit existing contours and the color is adapted. ## Summary -* The module `CSOListContainer` provides a lightweight Python interface to manage contours. +* The module `CSOListContainer` provides a lightweight container to manage contours. * It makes sense to encapsulate a single module into a macro module to provide additional functionalities via Python scripting. 
* Notifications can be used to react on events. diff --git a/mevislab.github.io/content/tutorials/dataobjects/curves/curvesexample1.md b/mevislab.github.io/content/tutorials/dataobjects/curves/curvesexample1.md index bfc0a786a..b803772a8 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/curves/curvesexample1.md +++ b/mevislab.github.io/content/tutorials/dataobjects/curves/curvesexample1.md @@ -27,20 +27,20 @@ A curve requires x- and y-coordinates to be printed. You can use the `CurveCreat Add the modules to your workspace and connect them as seen below. -![Example Network](images/tutorials/dataobjects/curves/example_network.png "Example Network") +![Example network](images/tutorials/dataobjects/curves/example_network.png "Example network") ### Creating a Curve Click on the output of the `CurveCreator` module and open the Output Inspector. ![Empty Output Inspector](images/tutorials/dataobjects/curves/OutputInspector_empty.png "Empty Output Inspector") -Double-click {{}} on the `CurveCreator` module and open the Panel. +Double-click {{}} on the `CurveCreator` module and open the panel. -![CurveCreator Module](images/tutorials/dataobjects/curves/CurveCreatorModule.png "CurveCreator Module") +![CurveCreator module](images/tutorials/dataobjects/curves/CurveCreatorModule.png "CurveCreator module") -You can see a large input field Curve Table. Here you can enter the x- and y-values of your curve. The values of the first column will become the x-values and the second column will become the y-series. Comment lines start with a '#' character. +You can see a large input field Curve Table. Here, you can enter the x- and y-values of your curve. The values of the first column will become the x-values and the second column will become the y-series. Comment lines start with a '#' character. 
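The table format described above (first column x, remaining columns y, `#` starting a comment) can be sketched as a small parser in plain Python; `parse_curve_table` is an illustrative helper, not part of the `CurveCreator` module.

```python
def parse_curve_table(text):
    """Parse a CurveCreator-style table: '#' starts a comment,
    the first column is x, remaining columns are y-series."""
    xs, ys = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        values = [float(v) for v in line.split()]
        xs.append(values[0])
        ys.append(values[1:])  # one or more y-values per row
    return xs, ys

table = """
# My first curve
0 0
1 1
2 4
"""
print(parse_curve_table(table))  # ([0.0, 1.0, 2.0], [[0.0], [1.0], [4.0]])
```

With more than one y-column per row, each column corresponds to one curve, which is exactly the situation the *Split columns into data sets* flag addresses.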
-Enter the following into the *Curve Table*: +Enter the following into the Curve Table: {{< highlight filename="Curve Table" >}} ```Text # My first curve @@ -55,12 +55,12 @@ Enter the following into the *Curve Table*: ``` {{}} -Now, your *Output Inspector* shows a yellow line through the previously entered coordinates. Exactly the same curve is shown in the `SoRenderArea`. +Now, your Output Inspector shows a yellow line through the previously entered coordinates. Exactly the same curve is shown in the `SoRenderArea`. ![SoRenderArea](images/tutorials/dataobjects/curves/SoRenderArea.png "SoRenderArea") ### Creating Multiple Curves -Now, update the *Curve Table*, so that you are using three columns and click *Update* {{}}: +Now, update the Curve Table, so that you are using three columns and click Update {{}}: {{< highlight filename="Curve Table" >}} ```Text # My first curves @@ -75,16 +75,16 @@ Now, update the *Curve Table*, so that you are using three columns and click *Up ``` {{}} -You can see two curves. The second and third columns are printed as separate curves. Both appear yellow. After checking *Split columns into data sets*, you will see one yellow and one red curve. +You can see two curves. The second and third columns are printed as separate curves. Both appear yellow. After checking Split columns into data sets, you will see one yellow and one red curve. {{}} -If the flag *Split columns into data sets* is set to *TRUE*, then a table with more than two columns is split into different *CurveData* objects. This gives the user the possibility to assign a different style and title for each series. +If the flag Split columns into data sets is set to *TRUE*, then a table with more than two columns is split into different *CurveData* objects. This gives the user the possibility to assign a different style and title for each series. ### Titles and Styles -Let's do this. Open the panel of the `SoDiagram2D` module and check *Draw legend*. 
Enter *"Curve1 Curve2"* into the *Title(s)* text box of the `CurveCreator` module and click *Update* {{}}. +Let's do this. Open the panel of the `SoDiagram2D` module and check Draw legend. Enter *"Curve1 Curve2"* into the Title(s) text box of the `CurveCreator` module and click Update {{}}. -![SoRenderArea with Legend](images/tutorials/dataobjects/curves/SoRenderArea2.png "SoRenderArea with Legend") +![SoRenderArea with legend](images/tutorials/dataobjects/curves/SoRenderArea2.png "SoRenderArea with legend") You can also define a different location of the legend and set font sizes. @@ -94,16 +94,16 @@ Now, open the panel of the `StylePalette` module. The `StylePalette` module allows you to define twelve different styles for curves. Initially, without manual changes, the styles are applied one after the other. The first curve gets style 1, the second curve style 2, and so on. -Open the panel of your `CurveCreator` module again and define *Curve Style(s)* as *"3 6"*. *Update* {{}} your curves. +Open the panel of your `CurveCreator` module again and define Curve Style(s) as *"3 6"*. Update {{}} your curves. ![StylePalette applied](images/tutorials/dataobjects/curves/StylePalette_applied.png "StylePalette applied") -You now applied the style three for your first curve and style six for the second. This is how you can create twelve different curves with unique appearance. +You now applied the style three for your first curve and style six for the second. This is how you can create twelve different curves with a unique appearance each. ### Using Multiple Tables for Curve Generation In addition to adding multiple columns for different y-coordinates, you can also define multiple tables as input, so that you can also have different x-coordinates for multiple curves. 
-Update the *Curve Table* as defined below and click *Update* {{}}: +Update the Curve Table as defined below and click Update {{}}: {{< highlight filename="Curve Table" >}} ```Text # My first curves @@ -145,7 +145,7 @@ For more complex visualizations, you can also use *Matplotlib*. See examples at * Details of the different curves can be visualized by using the `SoDiagram2D` module. {{}} -The attached example network shows the curves after clicking *Update* on `CurveCreator` module. +The attached example network shows the curves after clicking Update on the `CurveCreator` module. {{}} {{< networkfile "examples/data_objects/curves/example1/Curves.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/dataobjects/markerobjects.md b/mevislab.github.io/content/tutorials/dataobjects/markerobjects.md index bca3d3ebc..17dbaaf0b 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/markerobjects.md +++ b/mevislab.github.io/content/tutorials/dataobjects/markerobjects.md @@ -14,34 +14,34 @@ menu: --- # Markers in MeVisLab {#MarkersInMeVisLab} -In MeVisLab you can attach markers to images and other data objects. In this example you will see how to create, process, and use markers. +In MeVisLab, you can attach markers to images and other data objects. In this example, you will see how to create, process, and use markers. ## Creation and Rendering To create markers, you can use a marker editor, for example, the `SoView2DMarkerEditor`. Connect this editor to a viewer as shown below. Now you can interactively create new markers. Connect the module `XMarkerListContainer` to your marker editor to store markers in a list. -![Create Markers](images/tutorials/dataobjects/markers/DO_Markers_01.png "Create Markers") +![Create markers](images/tutorials/dataobjects/markers/DO_Markers_01.png "Create markers") Using the `StylePalette` module, you can define a style for your markers.
In order to set different styles for different markers, change the field Color Mode in the panel of `SoView2DMarkerEditor` to *Index*. -![Style of Markers](images/tutorials/dataobjects/markers/DO_Markers_08.png "Style of Markers") +![Style of markers](images/tutorials/dataobjects/markers/DO_Markers_08.png "Style of markers") With the help of the module `So3DMarkerRenderer`, markers of an `XMarkerList` can be rendered in 3D. -![Rendering of Markers](images/tutorials/dataobjects/markers/DO_Markers_09.png "Rendering of Markers") +![Rendering of markers in 2D and in 3D](images/tutorials/dataobjects/markers/DO_Markers_09.png "Rendering of markers in 2D and in 3D") ## Working With Markers {{}} It is possible to convert other data objects into markers and also to convert markers into other data objects. -It is, for example, possible to set markers by using the `MaskToMarkers` module and later on generate a surface object from a list of markers using the `MaskToSurface` module. Marker conversion can also be done by various other modules, listed in [/Modules/Geometry/Markers]. +It is, for example, possible to set markers by using the `MaskToMarkers` module and later on generate a surface object from a list of markers using the `MaskToSurface` module. Marker conversion can also be done by various other modules, listed in [ *Modules* → *Geometry* → *Marker* ]. {{}} -Learn how to convert markers by building the following network. Press the *Reload* buttons of the modules `MaskToMarkers` and `MarkersToSurface` to enable the conversion. Now you can see both the markers and the created surface in the module `SoExaminerViewer`. Use the toggle options of the modules `SoToggle` and `SoWEMRenderer` to enable or disable the visualization of markers and surface. +Learn how to convert markers by building the following network. Press the Update/Apply buttons of the modules `MaskToMarkers` and `MarkersToSurface` to enable the conversion. 
Now you can see both the markers and the created surface in the module `SoExaminerViewer`. Use the toggle options of the modules `SoToggle` and `SoWEMRenderer` to enable or disable the visualization of markers and surface. {{}} -Make sure to set *Lower Threshold* of the `MaskToMarkers` module to 1000, so that the 3D object is rendered correctly. +Make sure to set Lower Threshold of the `MaskToMarkers` module to *1000*, so that the 3D object is rendered correctly. {{}} -![Convert Markers](images/tutorials/dataobjects/markers/DO_Markers_02.png "Convert Markers") +![Convert markers to surface](images/tutorials/dataobjects/markers/DO_Markers_02.png "Convert markers to surface") ## Exercise Get the HU value of the image at your markers location. diff --git a/mevislab.github.io/content/tutorials/dataobjects/markers/markerexample1.md b/mevislab.github.io/content/tutorials/dataobjects/markers/markerexample1.md index 2fbe51ec5..f18460452 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/markers/markerexample1.md +++ b/mevislab.github.io/content/tutorials/dataobjects/markers/markerexample1.md @@ -25,31 +25,31 @@ In this example, we will measure the distance between one position in an image t ### Develop Your Network Add the following modules and connect them as shown. -We changed the names of the modules `SoView2DMarkerEditor` and `XMarkerListContainer`, to distinguish these modules from two similar modules we will add later on. Open the panel of `SoView2DMarkerEditor` and select the tab *Drawing*. Now choose the *Color* *red*. +We changed the names of the modules `SoView2DMarkerEditor` and `XMarkerListContainer`, to distinguish these modules from two similar modules we will add later on. Open the panel of `SoView2DMarkerEditor` and select the tab *Drawing*. Now choose the Color *red*. 
-![Marker Color](images/tutorials/dataobjects/markers/DO_Markers_03.png "Marker Color") +![Marker color](images/tutorials/dataobjects/markers/DO_Markers_03.png "Marker color") As a next step, add two more modules: `SoView2DMarkerEditor` and `XMarkerListContainer`. Change their names and the marker color to *green* and connect them as shown. We also like to change the mouse button you need to press in order to create a marker. This allows to place both types of markers, the red ones and the green ones. In order to do this, open the panel of `GreenMarker`. Under *Buttons*, you can adjust which button needs to be pressed in order to place a marker. Select the *Button2* (the middle button of your mouse {{< mousebutton "middle" >}}) instead of *Button1* (the left mouse button {{< mousebutton "left" >}}). -In addition to that, we like to allow only one green marker to be present. If we place a new marker, the old marker should vanish. For this, select the *Max Size* to be one and select *Overflow Mode: Remove All*. +In addition to that, we like to allow only one green marker to be present. If we place a new marker, the old marker should vanish. For this, select the Max Size to be *1* and select Overflow Mode *Remove All*. -![Marker Editor Settings](images/tutorials/dataobjects/markers/DO_Markers_04.png "Marker Editor Settings") +![Marker editor settings](images/tutorials/dataobjects/markers/DO_Markers_04.png "Marker editor settings") ### Create Markers of Different Type Now, we can place as many red markers as we like, using the left mouse button {{< mousebutton "left" >}} and only one green marker using the middle mouse button {{< mousebutton "middle" >}}. 
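The Max Size / Overflow Mode behavior configured above (one green marker at most, older markers removed) can be sketched in plain Python. The function and mode strings below loosely mirror the panel options and are an illustration, not the `XMarkerListContainer` implementation.

```python
def add_marker(markers, marker, max_size=1, overflow_mode="Remove All"):
    """Sketch: if adding would exceed max_size, apply the overflow mode
    before appending the new marker."""
    if len(markers) >= max_size:
        if overflow_mode == "Remove All":
            markers.clear()          # forget every existing marker
        elif overflow_mode == "Remove First":
            markers.pop(0)           # drop only the oldest marker
    markers.append(marker)
    return markers

green = []
add_marker(green, (10, 20, 0))
add_marker(green, (30, 40, 0))
print(green)  # [(30, 40, 0)], only the newest green marker survives
```

With a larger `max_size`, the same rule keeps the list bounded while you keep placing markers.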
-![Two Types of Markers](images/tutorials/dataobjects/markers/DO_Markers_05.png "Two Types of Markers") +![Two types of markers](images/tutorials/dataobjects/markers/DO_Markers_05.png "Two types of markers") ### Calculate the Distance Between Markers -We like to calculate the minimum and maximum distance of the green marker to all red markers. In order to do this, add the module `DistanceFromXMarkerList` and connect it to `RedMarkerList`. Open the panels of `DistanceFromXMarkerList` and `GreenMarkerList`. Now, draw a parameter connection from the coordinate of the green marker, which is stored in the field *Current Item* Position in the panel of `GreenMarkerList`, to the field Position of `DistanceFromXMarkerList`. You can now press *Calculate Distance* in the panel of `DistanceFromXMatkerList` to see the result, meaning the distance of the green marker to all red markers in the panel of `DistanceFromXMarkerList`. +We like to calculate the minimum and maximum distance of the green marker to all red markers. In order to do this, add the module `DistanceFromXMarkerList` and connect it to `RedMarkerList`. Open the panels of `DistanceFromXMarkerList` and `GreenMarkerList`. Now, establish a parameter connection from the coordinate of the green marker, which is stored in the field *Current Item* Position in the panel of `GreenMarkerList`, to the field Position of `DistanceFromXMarkerList`. You can now press Calculate Distance in the panel of `DistanceFromXMarkerList` to see the result, meaning the distance of the green marker to all red markers in the panel of `DistanceFromXMarkerList`. ![Module DistanceFromXMarkerList](images/tutorials/dataobjects/markers/DO_Markers_06.png "Module DistanceFromXMarkerList") ### Automation of Distance Calculation To automatically update the calculation when placing a new marker, we need to tell the module `DistanceFromXMarkerList` **when** a new green marker is placed. 
Open the panels of `DistanceFromXMarkerList` and `GreenMarker` and draw a parameter connection from the field Currently busy in the panel of `GreenMarker` to Calculate Distance in the panel of `DistanceFromXMarkerList`. If you now place a new green marker, the distance from the new green marker to all red markers is calculated automatically. -![Calculation of Distance between Markers](images/tutorials/dataobjects/markers/DO_Markers_07.png "Calculation of Distance between Markers") +![Automatic calculation of distance between markers](images/tutorials/dataobjects/markers/DO_Markers_07.png "Automatic calculation of distance between markers") {{}} Another example for using a `SoView2DMarkerEditor` module can be found at [Image Processing - Example 3: Region Growing](tutorials/image_processing/image_processing3 "Image Processing - Example 3: Region Growing") diff --git a/mevislab.github.io/content/tutorials/dataobjects/surfaceobjects.md b/mevislab.github.io/content/tutorials/dataobjects/surfaceobjects.md index 755182d77..3f9b28315 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/surfaceobjects.md +++ b/mevislab.github.io/content/tutorials/dataobjects/surfaceobjects.md @@ -15,7 +15,7 @@ menu: # Surface Objects (WEMs){#WEMs} ## Introduction -In MeVisLab it is possible to create, visualize, process, and manipulate surface objects, also known as polygon meshes. Here, we call surface objects *Winged Edge Mesh*, in short WEM. In this chapter you will get an introduction into WEMs. In addition, you will find examples on how to work with WEMs. For more information on WEMs, take a look at the {{< docuLinks "/Resources/Documentation/Publish/SDK/ToolBoxReference/WEMDataStructure.html" "MeVisLab Toolbox Reference" >}}. If you like to know which WEM formats can be imported into MeVisLab, take a look at the *assimp* documentation [here](https://github.com/assimp/assimp). 
+In MeVisLab, it is possible to create, visualize, process, and manipulate surface objects, also known as polygon meshes. Here, we call surface objects *Winged Edge Mesh*, in short WEM. In this chapter you will get an introduction into WEMs. In addition, you will find examples on how to work with WEMs. For more information on WEMs, take a look at the {{< docuLinks "/Resources/Documentation/Publish/SDK/ToolBoxReference/WEMDataStructure.html" "MeVisLab Toolbox Reference" >}}. If you like to know which WEM formats can be imported into MeVisLab, take a look at the *assimp* documentation [here](https://github.com/assimp/assimp). [//]: <> (MVL-653) @@ -23,7 +23,7 @@ In MeVisLab it is possible to create, visualize, process, and manipulate surface To explain WEMs in MeVisLab, we will build a network that shows the structure and the characteristics of WEMs. We will start the example by generating a WEM forming a cube. With this, we will explain structures of WEMs called *Edges*, *Nodes*, *Surfaces*, and *Normals*. ### Initialize a WEM -Add the module `WEMInitialize` to your workspace, open its panel, and select a *Cube*. In general, a WEM is made up of surfaces. Here all surfaces are squares. In MeVisLab it is common to build WEMs out of triangles. +Add the module `WEMInitialize` to your workspace, open its panel, and select a *Cube*. In general, a WEM is made up of surfaces. Here all surfaces are quadrilaterals. In MeVisLab it is common to build WEMs out of triangles. ![WEM initializing](images/tutorials/dataobjects/surfaces/WEM_01_1.png "WEM initializing") @@ -35,28 +35,27 @@ For rendering WEMs, you can use the module `SoWEMRenderer` in combination with t The geometry of WEMs is given by different structures. Using specialized WEM renderer modules, all structures can be visualized. #### Edges -Add and connect the module `SoWEMRendererEdges` to your workspace to enable the rendering of WEM Edges. 
Here, we manipulated the line thickness to make the lines of the edges thicker. -![WEM Edges](images/tutorials/dataobjects/surfaces/WEM_01_3.png "WEM Edges") +Add and connect the module `SoWEMRendererEdges` to your workspace to enable the rendering of WEM edges. Here, we manipulated the line thickness to make the lines of the edges thicker. +![WEM edges](images/tutorials/dataobjects/surfaces/WEM_01_3.png "WEM edges") #### Nodes Nodes mark the corner points of each polygon. Therefore, nodes define the geometric properties of every WEM. To visualize the nodes, add and connect the module `SoWEMRendererNodes` as shown. By default, the nodes are visualized with an offset to the position they are located in. We reduced the offset to be zero, increased the point size, and changed the color. -![WEM Nodes](images/tutorials/dataobjects/surfaces/WEM_01_4.png "WEM Nodes") +![WEM nodes](images/tutorials/dataobjects/surfaces/WEM_01_4.png "WEM nodes") #### Faces -Between the nodes and alongside the edges, surfaces are created. The rendering of these surfaces can be enabled and disabled using the panel of `SoWEMRenderer`. -![WEM Faces](images/tutorials/dataobjects/surfaces/WEM_01_5.png "WEM Faces") +Between the nodes and alongside the edges, faces are created. The rendering of these faces can be enabled and disabled using the panel of `SoWEMRenderer`. +![WEM faces](images/tutorials/dataobjects/surfaces/WEM_01_5.png "WEM faces") #### Normals -Normals display the orthogonal vector either to the faces (Face Normals) or to the nodes (Nodes Normals). With the help of the module `SoWEMRendererNormals`, these structures can be visualized. -![WEM normal editor](images/tutorials/dataobjects/surfaces/WEM_01_6.png "WEM normal editor") +Normals display the orthogonal vector either to the faces (face normals) or to the nodes (node normals, which are just the average of adjacent face normals). With the help of the module `SoWEMRendererNormals`, these structures can be visualized.
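The two kinds of normals described above can be computed with a few lines of vector math; this is a generic sketch of the geometry, not the `SoWEMRendererNormals` implementation.

```python
import math

def face_normal(a, b, c):
    """Unit normal of a triangle: cross product of two edge vectors."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(x * x for x in n))
    return [x / length for x in n]

def node_normal(adjacent_face_normals):
    """Node normal: normalized average of the adjacent face normals."""
    s = [sum(n[i] for n in adjacent_face_normals) for i in range(3)]
    length = math.sqrt(sum(x * x for x in s))
    return [x / length for x in s]

# A triangle in the xy-plane has a normal pointing along z.
print(face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # [0.0, 0.0, 1.0]
```

The winding order of the nodes (a, b, c) determines which side the normal points to, which is why consistently oriented faces matter for correct shading.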
+![Network for rendering normals and nodes of a WEM](images/tutorials/dataobjects/surfaces/WEM_01_6.png "Network for rendering normals and nodes of a WEM") {{< imagegallery 2 "images/tutorials/dataobjects/surfaces/" "WEMNodeNormals" "WEMFaceNormals">}} ### WEMs in MeVisLab {#WEMsInMevislab} -In MeVisLab, WEMs can consist of triangles, squares, or other polygons. Most common in MeVisLab are surfaces composed of triangles, as shown in the following example. With the help of the module `WEMLoad`, existing WEMs can be loaded into the network. +In MeVisLab, WEMs can consist of triangles, quadrilaterals, or other polygons. Most common in MeVisLab are surfaces composed of triangles, as shown in the following example. With the help of the module `WEMLoad`, existing WEMs can be loaded into the network. {{< imagegallery 3 "images/tutorials/dataobjects/surfaces/" "WEMTriangles" "WEMNetwork" "WEMSurface" >}} ## Summary * WEMs are polygon meshes, in most cases composed of triangles. * WEM's geometry is determined by nodes, edges, faces, and normals, which can be visualized using renderer modules. - diff --git a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample1.md b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample1.md index 8ea3d5d5a..abb69b939 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample1.md +++ b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample1.md @@ -8,7 +8,7 @@ tags: ["Beginner", "Tutorial", "Data Objects", "3D", "Surfaces", "Meshes", "WEM" menu: main: identifier: "surfaceexample1" - title: "Creation of Surface Objects (WEMs) From an Image Via WEMIsoSurface Module" + title: "Creation of Surface Objects (WEMs) from an Image via WEMIsoSurface Module" weight: 705 parent: "surfaces" --- @@ -18,30 +18,30 @@ menu: {{< youtube "-KnZ5a27T0c">}} ## Introduction -In this example you will learn how to create a Winged Edge Mesh (WEM). 
There are several approaches on creating WEMs, a few of them are shown in this example. Instead of creating WEMs, they can also be imported, see chapter [Surface Objects (WEM)](tutorials/dataobjects/surfaceobjects). +In this example, you will learn how to create a Winged Edge Mesh (WEM). There are several approaches to creating WEMs, a few of them are shown in this example. In addition to creating WEMs, they can also be imported, see chapter [Surface Objects (WEM)](tutorials/dataobjects/surfaceobjects). ## Steps to Do ### From Image to Surface: Generating WEMs out of Voxel Images -At first, we will create a WEM out of a voxel image using the module `WEMIsoSurface`. Add and connect the shown modules. Load the image *$(DemoDataPath)/Bone.tiff* and set the *Iso Min. Value* in the panel of `WEMIsoSurface` to 1200. Tick the box *Use image max. value*. The module `WEMIsoSurface` creates surface objects out of all voxels with an isovalue equal or above 1200 (and smaller than the image max value). The module `SoWEMRenderer` can now be used to generate an Open Inventor scene, which can be displayed by the module `SoExaminerViewer`. +At first, we will create a WEM out of a voxel image using the module `WEMIsoSurface`. Add and connect the shown modules. Load the image *$(DemoDataPath)/Bone.tiff* and set the Iso Min. Value in the panel of `WEMIsoSurface` to *1200*. Tick the checkbox Use image max. value. The module `WEMIsoSurface` creates surface objects out of all voxels with an isovalue equal to or above *1200* (and smaller than the image's maximum value). The module `SoWEMRenderer` can now be used to generate an Open Inventor scene, which can be displayed by the module `SoExaminerViewer`. -![WEM](images/tutorials/dataobjects/surfaces/DO6_01.png "WEM") +![WEM from a voxel image](images/tutorials/dataobjects/surfaces/DO6_01.png "WEM from a voxel image") ### From Surface to Image: Generating Voxel Images out of WEM -It is not only possible to create WEMs out of voxel images.
You can also transform WEMs into voxel images: Add and connect the modules `VoxelizeWEM` and `View2D` as shown and press the *Update* button of the module `VoxelizeWEM`. +It is not only possible to create WEMs out of voxel images. You can also transform WEMs into voxel images: Add and connect the modules `VoxelizeWEM` and `View2D` as shown and press the Update button of the module `VoxelizeWEM`. -![WEM](images/tutorials/dataobjects/surfaces/DO6_02.png "WEM") +![Voxel image from a WEM](images/tutorials/dataobjects/surfaces/DO6_02.png "Voxel image from a WEM") ### From Contour to Surface: Generating WEMs out of CSOs Now, we like to create WEMs out of CSOs. To create CSOs, load the network from [Contour Example 2](tutorials/dataobjects/contours/contourexample2) and create some CSOs. Next, add and connect the module `CSOToSurface` to convert CSOs into a surface object. To visualize the created WEM, add and connect the modules `SoWEMRenderer` and `SoExaminerViewer`. -![WEM](images/tutorials/dataobjects/surfaces/DO6_03.png "WEM") +![WEM from CSOs](images/tutorials/dataobjects/surfaces/DO6_03.png "WEM from CSOs") It is also possible to display the WEM in 2D in addition to the original image. In order to do that, add and connect the modules `SoRenderSurfaceIntersection` and `SoView2DScene`. The module `SoRenderSurfaceIntersection` allows to display the voxel image and the created WEM in one viewer using the same coordinates. In its panel, you can choose the color used for visualizing the WEM. The module `SoView2DScene` renders an Open Inventor scene graph into 2D slices. -![WEM](images/tutorials/dataobjects/surfaces/DO6_04.png "WEM") +![WEM in a 2D viewer](images/tutorials/dataobjects/surfaces/DO6_04.png "WEM in a 2D viewer") If you like to transform WEMs back into CSOs, have a look at the module `WEMClipPlaneToCSO`. 
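The voxel selection rule described above for `WEMIsoSurface` — keep voxels whose value is at least the iso minimum and below the image maximum — can be sketched on a toy 1D "image". This illustrates only the selection rule, not the module's actual surface-extraction implementation:

```python
# Sketch (not the WEMIsoSurface implementation): which voxels fall inside the
# iso range [iso_min, image_max) and would contribute to the extracted surface.

def voxels_in_iso_range(volume, iso_min):
    """Return (index, value) pairs with iso_min <= value < max(volume)."""
    image_max = max(volume)  # corresponds to the ticked "Use image max. value" box
    return [(i, v) for i, v in enumerate(volume) if iso_min <= v < image_max]

volume = [0, 800, 1200, 1500, 2000]   # toy 1D "image"
selected = voxels_in_iso_range(volume, 1200)
```

With an iso minimum of 1200, only the voxels with values 1200 and 1500 are selected; the voxel holding the image maximum itself lies outside the half-open range.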
@@ -52,7 +52,7 @@ If you like to transform WEMs back into CSOs, have a look at the module `WEMClip * WEMs can be transformed into voxel images using `VoxelizeWEM`. {{}} -Whenever converting voxel data to pixel data, keep the so called **Partial Volume Effect** in mind, see [wikipedia](https://en.wikipedia.org/wiki/Partial_volume_(imaging) "Partial Volume Effect") for details. +Whenever converting surface data to voxel data, keep the so-called **Partial Volume Effect** in mind, see [wikipedia](https://en.wikipedia.org/wiki/Partial_volume_(imaging) "Partial Volume Effect") for details. {{}} {{< networkfile "examples/data_objects/surface_objects/example1/SurfaceExample1.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample2.md b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample2.md index 2ac45fde5..eb6250665 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample2.md +++ b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample2.md @@ -25,7 +25,7 @@ In this example, you will learn how to modify and process WEMs. ### Develop Your Network #### Modification of WEMs -Use the module `WEMLoad` to load the file *venus.off*. Then, add and connect the shown modules. We like to display the WEM *venus* two times, one time this WEM is modified. You can use the module `WEMModify` to apply modifications. In its panel, change the scale and the size of the WEM. Now, you see two times the `venus` next to each other. +Use the module `WEMLoad` to load the file *venus.off*. Then, add and connect the shown modules. We like to display the WEM *venus* twice; one of the two is modified. You can use the module `WEMModify` to apply modifications. In its panel, change the scale and the size of the WEM. Now, you see the *venus* twice, next to each other.
![WEMModify](images/tutorials/dataobjects/surfaces/DO7_01.png "WEMModify") @@ -40,17 +40,17 @@ Now, we like to calculate the distance between our two WEMs. In order to do this ![Calculate surface distance](images/tutorials/dataobjects/surfaces/DO7_03.png "Calculate surface distance") #### Annotations in 3D -As a last step, we like to draw the calculated distances as annotations into the image. This is a little bit tricky as we need the module `SoView2DAnnotation` to create annotations in a 3D viewer. Add and connect the following modules as shown. What is done here? We use the module `SoView2D` to display a 2D image in the `SoExaminerViewer`, in addition to the WEMs we already see in the viewer. We do not see an additional image in the viewer, as we chose no proper input image to the module `SoView2D` using the module `ConstantImage` with value 0. Thus, we pretend to have a 2D image, which we can annotate. Now, we use the module `SoView2DAnnotation` to annotate the pretended 2D image, displayed in the viewer of `SoExaminerViewer`. We already used the module `SoView2DAnnotation` in [Contour Example 4](tutorials/dataobjects/contours/contourexample4/). +As a last step, we like to draw the calculated distances as annotations into the image. This is a little bit tricky as we need the module `SoView2DAnnotation` to create annotations in a 3D viewer. Add and connect the following modules as shown. What is done here? We use the module `SoView2D` to display a 2D image in the `SoExaminerViewer`, in addition to the WEMs we already see in the viewer. We do not see an additional image in the viewer, as we chose no proper input image to the module `SoView2D` using the module `ConstantImage` with value *0*. Thus, we pretend to have a 2D image, which we can annotate. Now, we use the module `SoView2DAnnotation` to annotate this placeholder 2D image, displayed in the viewer of `SoExaminerViewer`.
We already used the module `SoView2DAnnotation` in [Contour Example 4](tutorials/dataobjects/contours/contourexample4/). -In the `SoView2D` module, you need to uncheck the option *Draw image data*. +In the `SoView2D` module, you need to uncheck the option Draw image data. ![Annotation modules](images/tutorials/dataobjects/surfaces/DO7_05.png "Annotation modules") -Now, change the *Annotation Mode* to *User*, as we like to insert custom annotations. In addition, disable to *Show vertical ruler*. +Now, change the Annotation Mode to *User*, as we like to insert custom annotations. In addition, disable Show vertical ruler. ![Select annotation mode](images/tutorials/dataobjects/surfaces/DO7_06.png "Select annotation mode") -Next, open the tab *Input* and draw parameter connections from the results of the distance calculations, which can be found in the panel of `WEMSufaceDistance`, to the input fields in the panel of `SoView2DAnnotation`. +Next, open the tab *Input* and establish parameter connections from the results of the distance calculations, which can be found in the panel of `WEMSurfaceDistance`, to the input fields in the panel of `SoView2DAnnotation`. ![Define annotation parameters](images/tutorials/dataobjects/surfaces/DO7_07.png "Define annotation parameters") diff --git a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample3.md b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample3.md index 283bf6449..4b425cc1c 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample3.md +++ b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample3.md @@ -25,13 +25,13 @@ In these examples, we are showing two different possibilities to interact with t ### Scale, Rotate, and Move a WEM in a Scene We are using a `SoTransformerDragger` module to apply transformations to the visualization of a 3D WEM object via mouse interactions.
-Add a `SoCube` and a `SoBackground` module and connect both to a `SoExaminerViewer`. For a better understanding, you should also add a `SoCoordinateSystem` module and connect it to the viewer. Change the *User Transform Mode* to *User Transform Instead Of Input* and set *User Scale* to 2 for *x*, *y*, and *z*. +Add a `SoCube` and a `SoBackground` module and connect both to a `SoExaminerViewer`. For a better understanding, you should also add a `SoCoordinateSystem` module and connect it to the viewer. Change the User Transform Mode to *User Transform Instead Of Input* and set User Scale to *2* for *x*, *y*, and *z*. -![Initial Network](images/tutorials/dataobjects/surfaces/WEMExample3_1.png "Initial Network") +![Initial network](images/tutorials/dataobjects/surfaces/WEMExample3_1.png "Initial network") The `SoExaminerViewer` shows your cube and the world coordinate system. You can interact with the camera (rotate, zoom, and pan), the visualization of the cube itself does not change. It remains in the center of the coordinate system. -![Initial Cube](images/tutorials/dataobjects/surfaces/WEMExample3_2.png "Initial Cube") +![Initial cube](images/tutorials/dataobjects/surfaces/WEMExample3_2.png "Initial cube") Scaling, rotating, and translating the visualization of the cube can be done by using the module `SoTransformerDragger`. @@ -39,9 +39,9 @@ Additionally, add a `SoTransform` module to your network. Add all modules except ![SoTransformerDragger and SoTransform](images/tutorials/dataobjects/surfaces/WEMExample3_3.png "SoTransformerDragger and SoTransform") -Draw parameter connections from *Translation*, *Scale Factor*, and *Rotation* of the `SoTransformerDragger` to the same fields of the `SoTransform` module. +Establish parameter connections from Translation, Scale Factor, and Rotation of the `SoTransformerDragger` to the same fields of the `SoTransform` module. 
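Conceptually, `SoTransform` applies the connected scale, rotation, and translation values to each point of the scene below it, while the geometry itself stays unchanged in memory. A minimal sketch of that order of operations — illustrative only, with rotation restricted to the z-axis for brevity, not the Open Inventor implementation:

```python
# Sketch (not the Open Inventor implementation): scale, then rotate, then
# translate a point, as a transform node conceptually does for every point
# of the scene below it.
import math

def transform_point(p, scale=(1, 1, 1), rot_z_deg=0.0, translation=(0, 0, 0)):
    # 1) scale each component
    x, y, z = (c * s for c, s in zip(p, scale))
    # 2) rotate about the z-axis (a full transform would use a quaternion)
    a = math.radians(rot_z_deg)
    x, y = x * math.cos(a) - y * math.sin(a), x * math.sin(a) + y * math.cos(a)
    # 3) translate
    return tuple(c + t for c, t in zip((x, y, z), translation))

# A point on the x-axis, scaled by 2, rotated 90 degrees, lifted by 1:
moved = transform_point((1, 0, 0), scale=(2, 2, 2), rot_z_deg=90, translation=(0, 0, 1))
```

This is why dragging the handles never edits the cube itself: only the transform values change, and the viewer re-applies them on every render.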
-Opening your SoExaminerViewer now allows you to use handles of the `SoTransformerDragger` to scale, rotate, and move the visualization of the cube. The cube itself remains unchanged in memory, a matrix for translation is applied to the original 3D object's visualization. +Opening your `SoExaminerViewer` now allows you to use handles of the `SoTransformerDragger` to scale, rotate, and move the visualization of the cube. The cube itself remains unchanged in memory; a transformation matrix is applied to the visualization of the original 3D object. You can additionally interact with the camera as already done before. @@ -49,7 +49,7 @@ You can additionally interact with the camera as already done before. You need to change the active tool on the right side of the `SoExaminerViewer`. Use the *Pick Mode* for applying transformations and the *View Mode* for adjusting the camera. {{}} -![Moved, Rotated, and Scaled Cube](images/tutorials/dataobjects/surfaces/WEMExample3_4.png "Moved, Rotated, and Scaled Cube") +![Moved, rotated, and scaled cube](images/tutorials/dataobjects/surfaces/WEMExample3_4.png "Moved, rotated, and scaled cube") You can also try the other `So*Dragger` modules in MeVisLab for variations of the `SoTransformerDragger`. @@ -72,45 +72,45 @@ Add a `WEMBulgeEditor` and a `SoWEMBulgeEditor` to your network and connect them Opening the viewer, you can still not edit the object. -We need a lookup table (LUT) to interact with the WEM. Add a `WEMGenerateStatistics` between the WEMInitialize and the WEMBulgeEditor. The module `WEMGenerateStatistics` generates node, edge, and face statistics of a WEM and stores the information in the WEM's Primitive Value Lists. +We need a lookup table (LUT) to interact with the WEM. Add a `WEMGenerateStatistics` between the `WEMInitialize` and the `WEMBulgeEditor`. The module `WEMGenerateStatistics` generates node, edge, and face statistics of a WEM and stores the information in the WEM's Primitive Value Lists.
{{}} More information about Primitive Value Lists (PVL) can be found in [Surface Example 5](tutorials/dataobjects/surfaces/surfaceexample5). {{}} -Check *New node PVL* and set *New PVL Name* to *myPVL*. +Check New node PVL and set New PVL Name to *myPVL*. ![WEMGenerateStatistics](images/tutorials/dataobjects/surfaces/WEMExample3_7.png "WEMGenerateStatistics") -In the `WEMBulgeEditor`, set *PVL Used as LUT Values* to previously generated *myPVL*. +In the `WEMBulgeEditor`, set PVL Used as LUT Values to the previously generated *myPVL*. ![WEMBulgeEditor PVL](images/tutorials/dataobjects/surfaces/WEMExample3_8.png "WEMBulgeEditor PVL") -Add a `SoLUTEditor` and connect it to `SoWEMRenderer`. You also have to connect the `WEMGenerateStatistics` to the `SoWEMRenderer`. Set `SoWEMRenderer` *Color Mode* to *Lut Values* and select *PVL Used as LUT Values* to *myPVL*. +Add a `SoLUTEditor` and connect it to `SoWEMRenderer`. You also have to connect the `WEMGenerateStatistics` to the `SoWEMRenderer`. Set `SoWEMRenderer`'s Color Mode to *Lut Values* and set PVL Used as LUT Values to *myPVL*. -![Final Network](images/tutorials/dataobjects/surfaces/WEMExample3_10.png "Final Network") +![Final network](images/tutorials/dataobjects/surfaces/WEMExample3_10.png "Final network") -Open the panel of the `SoLUTEditor`. Configure *New Range Min* as -1 and *New Range Max* as 1 in *Range* tab. Apply the new range. Define the LUT as seen below in *Editor* tab. +Open the panel of the `SoLUTEditor`. Configure New Range Min as *-1* and New Range Max as *1* in the *Range* tab. Apply the new range. Define the LUT as seen below in the *Editor* tab. -Now, your Primitive Value List is used to colorize the affected region for your tansformations. You can see the region by the color on hovering the mouse over the WEM. +Now, your PVL is used to colorize the affected region for your transformations.
You can see the region by its color when hovering the mouse over the WEM. -![Affected region colored](images/tutorials/dataobjects/surfaces/Affected_Region.png "Affected region colored") +![Affected region colored: preview](images/tutorials/dataobjects/surfaces/Affected_Region.png "Affected region colored: preview") -The size of the region can be changed via {{}} and mouse wheel {{< mousebutton "middle" >}}. Make sure that the *Influence Radius* in `WEMBulgeEditor` is larger than 0. +The size of the region can be changed via {{}} and mouse wheel {{< mousebutton "middle" >}}. Make sure that the Influence Radius in `WEMBulgeEditor` is larger than *0*. {{}} You need to change the active tool on the right side of the `SoExaminerViewer`. Use the *Pick Mode* for applying transformations and the *View Mode* for adjusting the camera. {{}} -![Modify WEM](images/tutorials/dataobjects/surfaces/Modify.png "Modify WEM") +![Bulged WEM](images/tutorials/dataobjects/surfaces/Modify.png "Bulged WEM") {{< networkfile "examples/data_objects/surface_objects/example3/WEMExample3b.mlab" >}} -A much more complex example using medical images and allowing to modify in 3D and on 2D slices can be seen by opening the example network of the `WEMBulgeEditor`. +A much more complex example using medical images and allowing modifications in 3D and on 2D slices can be seen by opening the example network of the `WEMBulgeEditor`. In this network, you can bulge in 2D and in 3D. -![WEMBulgeEditor Example Network](images/tutorials/dataobjects/surfaces/WEMExample3_11.png "WEMBulgeEditor Example Network") +![WEMBulgeEditor example network](images/tutorials/dataobjects/surfaces/WEMExample3_11.png "WEMBulgeEditor example network") {{}} For other interaction possibilities, you can play around with the example networks of the modules `SoCSODrawOnSurface`, `SoVolumeCutting` and `WEMExtrude`.
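The color-coding configured in this section — a LUT mapping PVL values from the range [-1, 1] to colors — boils down to a clamp-and-interpolate step. A minimal sketch, with illustrative endpoint colors that need not match the LUT you defined in `SoLUTEditor`:

```python
# Sketch (not the SoLUTEditor implementation): a linear lookup table mapping a
# scalar PVL value in [-1, 1] to an RGB color by interpolating between two
# endpoint colors. The endpoint colors here are illustrative choices.

def lut_color(value, lo=-1.0, hi=1.0, color_lo=(0, 0, 255), color_hi=(255, 0, 0)):
    # Clamp the value into [lo, hi], then normalize it to [0, 1].
    t = (min(max(value, lo), hi) - lo) / (hi - lo)
    # Interpolate each color channel independently.
    return tuple(round(a + t * (b - a)) for a, b in zip(color_lo, color_hi))
```

A per-node PVL value fed through such a table yields the per-node colors the renderer blends across each face, which is why the affected region shows up as a smooth color gradient.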
@@ -118,6 +118,6 @@ For other interaction possibilities, you can play around with the example networ ## Summary * MeVisLab provides multiple options to interact with 3D surfaces. -* Modules of the `So\*Dragger` family allow to scale, rotate, and translate a WEM. +* Modules of the `So\*Dragger` family allow you to scale, rotate, and translate the visualization of a WEM. * You can always use a `SoCoordinateSystem` to see the current world coordinates. -* The `WEMBulgeEditor` allows you to interactively modify a WEM via mouse. +* The `WEMBulgeEditor` allows you to interactively modify a WEM via mouse {{< mousebutton "left" >}}. diff --git a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample4.md b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample4.md index 40356991e..5723cbf58 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample4.md +++ b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample4.md @@ -8,7 +8,7 @@ tags: ["Beginner", "Tutorial", "Data Objects", "3D", "Surfaces", "Meshes", "WEM" menu: main: identifier: "surfaceexample4" - title: "Example for Implementing WEM Translations Via Mouse Interaction" + title: "Example for Implementing WEM Translations via Mouse Interaction" weight: 720 parent: "surfaces" --- @@ -23,88 +23,88 @@ In this example, we like to interactively move WEMs using `SoDragger` modules in ### Develop Your Network ### Interactively Translating Objects in 3D Using SoDragger Modules -Add and connect the following modules as shown. On the panel of the module `WEMInitialize`, select the *Model* *Octasphere*. After that, open the viewer `SoExaminerViewer` and make sure to select the *Interaction Mode*. Now, you are able to click on the presented *Octasphere* and move it alongside one axis. The following modules are involved in the interactions: +Add and connect the following modules as shown.
On the panel of the module `WEMInitialize`, select the Model *Octasphere*. After that, open the viewer `SoExaminerViewer` and make sure to select the *Interaction Mode*. Now, you are able to click {{< mousebutton "left" >}} on the presented *Octasphere* and move it alongside one axis. The following modules are involved in the interactions: * `SoMITranslate1Dragger`: This module allows interactive translation of the object alongside one axis. You can select the axis for translation in the panel of the module. * `SoMIDraggerContainer`: This module is responsible for actually changing the translation values of the object. -![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_01.png "Interactive dragging of objects") +![Initial network](images/tutorials/dataobjects/surfaces/DO10_01.png "Initial network") ### Interactively Translating a WEM Alongside Three Axes We like to be able to interactively move a WEM alongside all three axes. In MeVisLab, there is the module `SoMITranslate2Dragger`, which allows translations alongside two axes, but there is no module that allows object translation in all three directions. Therefore, we will create a network that solves this task. The next steps will show you how you create three planes intersecting the objects. Dragging one plane will drag the object alongside one axis. In addition, these planes will only be visible when hovering over them. #### Creation of Planes Intersecting an Object -We start creating a plane that will allow dragging in x-direction. In order to do that, modify your network as shown: Add the modules `WEMModify` and `SoBackground`, and connect the module `SoCube` to the dragger modules. You can select the translation direction in the panel of `SoMITranslate1Dragger`. +We start creating a plane that will allow dragging in the x-direction. 
In order to do that, modify your network as shown: Add the modules `WEMModify` and `SoBackground`, and connect the module `SoCube` to the dragger modules. You can select the translation direction in the panel of `SoMITranslate1Dragger`. -![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_02.png "Interactive dragging of objects") +![Network for dragging alongside one axis](images/tutorials/dataobjects/surfaces/DO10_02.png "Network for dragging alongside one axis") -We will modify the cube to be able to use it as a dragger plane. In order to do this, open the panel of `SoCube` and reduce the *Width* to be 0. This sets a plane in y- and z-direction. +We will modify the cube to be able to use it as a dragger plane. In order to do this, open the panel of `SoCube` and reduce the Width to be *0*. This defines a plane in the y- and z-direction. -![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_02_1.png "Interactive dragging of objects") +![Parameters for showing a plane](images/tutorials/dataobjects/surfaces/DO10_02_1.png "Parameters for showing a plane") -We want to move the object when dragging the plane. Thus, we need to modify the translation of our object when moving the plane. Open the panels of the modules `WEMModify` and `SoMIDraggerContainer` and draw a parameter connection from one *Translation* vector to the other. +We want to move the object when dragging the plane. Thus, we need to modify the translation of our object when moving the plane. Open the panels of the modules `WEMModify` and `SoMIDraggerContainer` and establish a parameter connection from one Translation vector to the other. 
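A parameter connection is essentially an observer mechanism: writing the source field pushes the new value to every connected field. A minimal sketch, using a made-up `Field` class rather than the actual MeVisLab field API:

```python
# Sketch (hypothetical Field class, not the MeVisLab API): a parameter
# connection propagates the source field's value to all connected fields.

class Field:
    def __init__(self, value=None):
        self.value = value
        self._listeners = []

    def connect_from(self, source):
        """Make this field follow `source`, mirroring its current value."""
        source._listeners.append(self)
        self.value = source.value

    def set_value(self, value):
        self.value = value
        for listener in self._listeners:
            listener.set_value(value)

# Dragging the plane updates the dragger's translation, which the
# connection forwards to the WEM's translation:
dragger_translation = Field((0.0, 0.0, 0.0))
wem_translation = Field()
wem_translation.connect_from(dragger_translation)
dragger_translation.set_value((1.0, 0.0, 0.0))  # wem_translation follows
```

Note that a naive propagation like this would recurse forever if two fields were connected to each other in a cycle — which is the situation the `SyncVector` module used later in this example is designed to avoid.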
-![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_03.png "Interactive dragging of objects") +![Dragging alongside an axis maps to a translation](images/tutorials/dataobjects/surfaces/DO10_03.png "Dragging alongside an axis maps to a translation") -As a next step, we want to adapt the size of the plane to the size of the object we have. Add the modules `WEMInfo` and `DecomposeVector3` to your workspace and open their panels. The module `WEMInfo` presents information about the given WEM, for example, its position and size. The module `DecomposeVector3` splits a 3D vector into its components. Now, draw a parameter connection from *Size* of `WEMInfo` to the vector in `DecomposeVector3`. As a next step, open the panel of `SoCube` and draw parameter connections from the fields Y and Z of `DecomposeVector3` to Height and Depth of `SoCube`. Now, the size of the plane adapts to the size of the object. +As a next step, we want to adapt the size of the plane to the size of the object we have. Add the modules `WEMInfo` and `DecomposeVector3` to your workspace and open their panels. The module `WEMInfo` presents information about the given WEM, for example, its position and size. The module `DecomposeVector3` splits a 3D vector into its components. Now, establish a parameter connection from Size of `WEMInfo` to the vector in `DecomposeVector3`. As a next step, open the panel of `SoCube` and establish parameter connections from the fields Y and Z of `DecomposeVector3` to Height and Depth of `SoCube`. Now, the size of the plane adapts to the size of the object. -![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_04.png "Interactive dragging of objects") +![Dynamic size of the plane](images/tutorials/dataobjects/surfaces/DO10_04.png "Dynamic size of the plane") The result can be seen in the next image. 
You can now select the plane in the *Interaction Mode* of the module `SoExaminerViewer` and move the plane together with the object alongside the x-axis. -![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_05.png "Interactive dragging of objects") +![Network for dragging alongside one axis](images/tutorials/dataobjects/surfaces/DO10_05.png "Network for dragging alongside one axis") #### Modifying the Appearance of the Plane -For changing the visualization of the dragger plane, add the modules `SoGroup`, `SoSwitch`, and `SoMaterial` to your network and connect them as shown. In addition, group all modules together that are responsible for the translation in the x-direction. +For changing the visualization of the dragger plane, add the modules `SoGroup`, `SoSwitch`, and `SoMaterial` to your network and connect them as shown. In addition, group all modules together that are responsible for the translation in the x-direction. -![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_06.png "Interactive dragging of objects") +![Different materials and grouping of modules](images/tutorials/dataobjects/surfaces/DO10_06.png "Different materials and grouping of modules") -We want to switch the visualization of the plane dependent on the mouse position in the viewer. In other words, when hovering over the plane, the plane should be visible, when the mouse is in another position and the possibility to drag the object is not given, the plane should be invisible. We use the module `SoMaterial` to edit the appearance of the plane. Open the panel of the module `SoMITranslate1Dragger`. The box of the field Highlighted is ticked when the mouse hovers over the plane. Thus, we can use the field's status to switch between different presentations of the plane. In order to do this, open the panel of `SoSwitch` and draw a parameter connection from Highlighted of `SoMITranslate1Dragger` to Which Child of `SoSwitch`. 
+We want to switch the visualization of the plane depending on the mouse position in the viewer. In other words, when hovering over the plane, the plane should be visible, and when the mouse is in another position and the possibility to drag the object is not given, the plane should be invisible. We use the module `SoMaterial` to edit the appearance of the plane. Open the panel of the module `SoMITranslate1Dragger`. The checkbox of the field Highlighted is ticked when the mouse hovers over the plane. Thus, we can use the field's status to switch between different presentations of the plane. In order to do this, open the panel of `SoSwitch` and establish a parameter connection from Highlighted of `SoMITranslate1Dragger` to Which Child of `SoSwitch`. -![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_06_02.png "Interactive dragging of objects") +![Controlling the visibility of the plane](images/tutorials/dataobjects/surfaces/DO10_06_02.png "Controlling the visibility of the plane") -Open the panels of the modules `SoMaterial`. Change the *Transparency* of the first `SoMaterial` module to make the plane invisible when not hovering over the plane. Furthermore, we changed the *Diffuse Color* of the module `SoMaterial1` to red, so that the plane appears in red when hovering over it. +Open the panels of the modules `SoMaterial`. Change the Transparency of the first `SoMaterial` module to make the plane invisible when not hovering over the plane. Furthermore, we changed the Diffuse Color of the module `SoMaterial1` to red, so that the plane appears in red when hovering over it.
-![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_07.png "Interactive dragging of objects") +![Setting visual parameters for the plane](images/tutorials/dataobjects/surfaces/DO10_07.png "Setting visual parameters for the plane") When hovering over the plane, the plane becomes visible and the option to move the object alongside the x-axis is given. When you do not hover over the plane, the plane is invisible. -![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_08.png "Interactive dragging of objects") +![Showing the plane in red](images/tutorials/dataobjects/surfaces/DO10_08.png "Showing the plane in red") #### Interactive Object Translation in Three Dimensions We do not only want to move the object in one direction, we like to be able to do interactive object translations in all three dimensions. For this, copy the modules responsible for the translation in one direction and change the properties to enable translations in other directions. -We need to change the size of `SoCube1` and `SoCube2` to form planes that cover surfaces in x- and z-, as well as x- and y-directions. To do that, draw the respective parameter connections from `DecomposeVector3` to the fields of the modules `SoCube`. In addition, we need to adapt the field Direction in the panels of the modules `SoMITranslate1Dragger`. +We need to change the size of `SoCube1` and `SoCube2` to form planes that cover surfaces in x- and z-, as well as x- and y-directions. To do that, establish the respective parameter connections from `DecomposeVector3` to the fields of the modules `SoCube`. In addition, we need to adapt the field Direction in the panels of the modules `SoMITranslate1Dragger`. 
-![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_09.png "Interactive dragging of objects") +![Dragging in all three cardinal directions](images/tutorials/dataobjects/surfaces/DO10_09.png "Dragging in all three cardinal directions") -Change width, height, and depth of the three cubes, so that each of them represents one plane. The values need to be set to (0, 2, 2), (2, 0, 2), and (2, 2, 0). +Change width, height, and depth of the three cubes, so that each of them represents one plane. The values need to be set to *(0, 2, 2)*, *(2, 0, 2)*, and *(2, 2, 0)*. -As a next step, we like to make sure that all planes always intersect the object, even though the object is moved. To do this, we need to synchronize the field Translation of all `SoMIDraggerContainer` modules and the module `WEMModify`. Draw parameter connections from one Translation field to the next, as shown below. +As a next step, we like to make sure that all planes always intersect the object, even though the object is moved. To do this, we need to synchronize the field Translation of all `SoMIDraggerContainer` modules and the module `WEMModify`. Establish parameter connections from one Translation field to the next, as shown below. -![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_10.png "Interactive dragging of objects") +![Setting the initial translation](images/tutorials/dataobjects/surfaces/DO10_10.png "Setting the initial translation") We like to close the loop, so that a change in one field Translation causes a change in all the other Translation fields. To do this, we need to include the module `SyncVector`. The module `SyncVector` avoids an infinite processing loop causing a permanent update of all fields Translation. -Add the module `SyncVector` to your workspace and open its panel. Draw a parameter connection from the field Translation of the module `SoMIDraggerContainer2` to *Vector1* of `SyncVector`. 
The field Vector1 is automatically synchronized to the field Vector2. Now, connect the field Vector2 to the field Translate of the module `WEMModify`. Your synchronization network is now established. +Add the module `SyncVector` to your workspace and open its panel. Establish a parameter connection from the field Translation of the module `SoMIDraggerContainer2` to Vector1 of `SyncVector`. The field Vector1 is automatically synchronized to the field Vector2. Now, connect the field Vector2 to the field Translate of the module `WEMModify`. Your synchronization network is now established. -![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_11.png "Interactive dragging of objects") +![Avoiding an infinite processing loop](images/tutorials/dataobjects/surfaces/DO10_11.png "Avoiding an infinite processing loop") -To enable transformations in all directions, we need to connect the modules `SoMIDraggerContainer` to the viewer. First, connect the modules to `SoGroup`, after that connect `SoGroup` to `SoExaminerViewr`. +To enable transformations in all directions, we need to connect the modules `SoMIDraggerContainer` to the viewer. First, connect the modules to `SoGroup`, after that, connect `SoGroup` to `SoExaminerViewer`. -![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_12.png "Interactive dragging of objects") +![All draggers are connected](images/tutorials/dataobjects/surfaces/DO10_12.png "All draggers are connected") -As a next step, we like to enlarge the planes to make them exceed the object. For that, add the module `CalculateVectorFromVectors` to your network. Open its panel and connect the field Size of `WEMInfo` to Vector 1. We like to enlarge the size by one, so we add the vector (1, 1, 1), by editing the field Vector 2. Now, connect the Result to the field V of the module `DecomposeVector3`. +As a next step, we like to enlarge the planes to make them exceed the object.
For that, add the module `CalculateVectorFromVectors` to your network. Open its panel and connect the field Size of `WEMInfo` to Vector1. We like to enlarge the size by one, so we add the vector *(1, 1, 1)* by editing the field Vector2. Now, connect the Result to the field V of the module `DecomposeVector3`. -![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_13.png "Interactive dragging of objects") +![Enlarging the initial size of the planes](images/tutorials/dataobjects/surfaces/DO10_13.png "Enlarging the initial size of the planes") At last, we can condense all the modules enabling the transformation into one local macro module. For that, group all the modules together and convert the group into a macro module as shown in [Chapter I: Basic Mechanisms](tutorials/basicmechanisms#TutorialMacroModules). -![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_14.png "Interactive dragging of objects") +![All dragger modules are selected](images/tutorials/dataobjects/surfaces/DO10_14.png "All dragger modules are selected") -The result can be seen in the next image. This module can now be used for interactive 3D transformations for all kinds of WEMs. +The result can be seen in the next image. This module can now be used for interactive 3D translations for all kinds of WEMs. -![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_15.png "Interactive dragging of objects") +![Final network](images/tutorials/dataobjects/surfaces/DO10_15.png "Final network") ## Summary * A family of `SoDragger` modules is available that can be used to interactively modify Open Inventor objects.
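The epsilon-guarded synchronization described above is what stops the Translation loop from updating forever. A minimal standalone sketch of that mechanism (plain C++ for illustration only; `SyncedVector`, `setA`, and `setB` are invented names, not the MeVisLab API):

```cpp
#include <array>
#include <cmath>

// Two mutually synchronized fields, like Vector1/Vector2 of SyncVector:
// a value is only propagated to the partner if it differs by more than
// epsilon. Once both sides agree, the chain of notifications stops,
// which is how the closed parameter loop avoids infinite recursion.
struct SyncedVector {
    std::array<double, 3> a{};  // stand-in for Vector1 / Translation
    std::array<double, 3> b{};  // stand-in for Vector2 / Translate
    double epsilon = 1e-6;
    int propagations = 0;       // counts how often a sync actually fired

    bool differs(const std::array<double, 3>& x,
                 const std::array<double, 3>& y) const {
        for (int i = 0; i < 3; ++i)
            if (std::fabs(x[i] - y[i]) > epsilon) return true;
        return false;
    }

    // Without the epsilon check, setA and setB would call each other forever.
    void setA(const std::array<double, 3>& v) {
        a = v;
        if (differs(a, b)) { ++propagations; b = a; setB(b); }
    }
    void setB(const std::array<double, 3>& v) {
        b = v;
        if (differs(b, a)) { ++propagations; a = b; setA(a); }
    }
};
```

Setting either side once triggers exactly one propagation; repeating the same value triggers none, so the cycle terminates.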
diff --git a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample5.md b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample5.md index 183d89124..dfa22483f 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample5.md +++ b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample5.md @@ -18,19 +18,19 @@ menu: {{< youtube "Rap1RY6l5Cc">}} ## Introduction -WEMs do not only contain the coordinates of nodes and surfaces, they can also contain additional information. That information is stored in so-called *Primitive Value Lists* (PVLs). Every node, every surface, and every edge can contain such a list. In these lists, you can, for example, store the color of the node or specific patient information. This information can be used for visualization or for further statistical analysis. +WEMs do not only contain the coordinates of nodes, they can also contain additional information. That information is stored in so-called *Primitive Value Lists* (PVLs). Every node, every edge, and every face can contain such a list. In these lists, you can, for example, store the color of the node or specific patient information. This information can be used for visualization or for further statistical analysis. In this example we like to use PVLs to color-code and visualize the distance between two WEMs. ## Steps to Do ### Develop Your Network -We start our network by initializing two WEMs using `WEMInitialize`. We chose an *Octasphere* and a resized *Cube*. Use the modules `SoWEMRenderer`, `SoExaminerViewer`, and `SoBackground` to visualize the WEMs. +We start our network by initializing two WEMs using `WEMInitialize`. We choose an *Octasphere* and a resized *Cube*. Use the modules `SoWEMRenderer`, `SoExaminerViewer`, and `SoBackground` to visualize the WEMs.
![WEMInitialize](images/tutorials/dataobjects/surfaces/DO12_01.png "WEMInitialize") #### Subdividing WEM Edges -As a next step, add and connect two modules `WEMSubdivide` to further divide edges and surfaces. With this step we increase the node density to have an accurate distance measurement. +As a next step, add and connect two modules `WEMSubdivide` to further divide edges and surfaces. With this step, we increase the node density to have an accurate distance measurement. ![WEMSubdivide](images/tutorials/dataobjects/surfaces/DO12_02.png "WEMSubdivide") @@ -52,26 +52,26 @@ What can we do with this information? We can use the calculated distances, store ![SoWEMRenderer](images/tutorials/dataobjects/surfaces/DO12_07.png "SoWEMRenderer") -To translate the LUT values from the PVLs into color, open the panel of `SoLUTEditor` and select the tab *Range*. We need to define the value range we like to work with. As the distance and thus the PVL value is expected to be 0 when the surfaces of both WEMs meet, we set the *New Range Min* to 0. As the size of the WEMs does not exceed 3, we set the *New Range Max* to 3. After that, press *Apply New Range*. +To translate the LUT values from the PVLs into color, open the panel of `SoLUTEditor` and select the tab *Range*. We need to define the value range we like to work with. As the distance and thus the PVL value is expected to be 0 when the surfaces of both WEMs meet, we set the New Range Min to *0*. As the size of the WEMs does not exceed 3, we set the New Range Max to *3*. After that, press Apply New Range. -![SoLUTEditor](images/tutorials/dataobjects/surfaces/DO12_08.png "SoLUTEditor") +![Setting a new range in the SoLUTEditor](images/tutorials/dataobjects/surfaces/DO12_08.png "Setting a new range in the SoLUTEditor") Our goal is to colorize faces of the *Octasphere* in red if they are close to or even intersect the cubic WEM. 
And we like to colorize faces of the *Octasphere* in green if these faces are far away from the cubic WEM. -Open the tab *Editor* of the panel of `SoLUTEditor`. This tab allows to interactively select a color for each PVL value. Select the color point on the left side. Its Position value is supposed to be *0*, so we like to set the color to *red* in order to color-code small distances between the WEMs in red. In addition to that, increase the Opacity of this color point. Next, select the right color point. Its Position is supposed to be *3* and thus equals the value of the field New Range Max. As these color points colorize large distances between WEMs, set the color to *green*. You can add new color points by clicking on the colorized bar in the panel. Select, for example, the color *yellow* for a color point in the middle. Select and shift the color points to get the desired visualization. +Open the tab *Editor* of the panel of `SoLUTEditor`. This tab allows you to interactively select a color for each PVL value. Select the color point on the left side. Its Position value is supposed to be *0*, so we like to set the color to *red* in order to color-code small distances between the WEMs in red. In addition to that, increase the Opacity of this color point. Next, select the right color point. Its Position is supposed to be *3* and thus equals the value of the field New Range Max. As these color points colorize large distances between WEMs, set the color to *green*. You can add new color points by clicking {{< mousebutton "left" >}} on the colorized bar in the panel. Select, for example, the color *yellow* for a color point in the middle. Select and shift the color points to get the desired visualization. ![Changing the LUT](images/tutorials/dataobjects/surfaces/DO12_09.png "Changing the LUT") Add the module `WEMModify` to your workspace and connect the module as shown. If you now shift the WEM using `WEMModify`, you can see that the colorization adapts.
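With color points at *0* (red) and *3* (green), and one added in the middle, the LUT interpolates linearly between neighboring color points over the PVL value range. A standalone sketch of that piecewise-linear lookup (plain C++ for illustration; `lutColor` and the exact point positions, including a yellow point at 1.5, are assumptions, not `SoLUTEditor` code):

```cpp
#include <algorithm>
#include <array>

using RGB = std::array<double, 3>;

// Piecewise-linear lookup between color points, as a LUT editor would
// build it: red at distance 0, yellow in the middle, green at 3.
// Values outside the configured range are clamped to its ends.
RGB lutColor(double value) {
    struct Point { double v; RGB c; };
    const Point points[] = {
        {0.0, {1.0, 0.0, 0.0}},  // small distance -> red
        {1.5, {1.0, 1.0, 0.0}},  // assumed middle point -> yellow
        {3.0, {0.0, 1.0, 0.0}},  // large distance -> green
    };
    value = std::clamp(value, 0.0, 3.0);
    for (int i = 0; i < 2; ++i) {
        if (value <= points[i + 1].v) {
            const double t =
                (value - points[i].v) / (points[i + 1].v - points[i].v);
            RGB c;
            for (int k = 0; k < 3; ++k)
                c[k] = points[i].c[k] + t * (points[i + 1].c[k] - points[i].c[k]);
            return c;
        }
    }
    return points[2].c;
}
```

For example, a face whose PVL distance value is 0 is rendered pure red, and any distance beyond the range maximum stays green because of the clamp.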
-![WEMModify](images/tutorials/dataobjects/surfaces/DO12_10.png "WEMModify") +![Moving one WEM with WEMModify](images/tutorials/dataobjects/surfaces/DO12_10.png "Moving one WEM with WEMModify") ### Interactive Shift of WEMs -As a next step, we like to implement the interactive shift of the WEM. Add the modules `SoTranslateDragger1` and `SyncVector`. Connect all translation vectors: Draw connections from the field Translate of `SoTranslateDragger1` to Vector1 of `SyncVector`, from Vector2 of `SyncVector` to Translate of `WEMModify`, and at last from Translate of `WEMModify` to Translate of `SoTranslateDragger1`. +As a next step, we like to implement the interactive shift of the WEM. Add the modules `SoTranslateDragger1` and `SyncVector`. Connect all translation vectors: Establish connections from the field Translate of `SoTranslateDragger1` to Vector1 of `SyncVector`, from Vector2 of `SyncVector` to Translate of `WEMModify`, and at last from Translate of `WEMModify` to Translate of `SoTranslateDragger1`. You can now interactively drag the WEM inside the viewer. -![Dragging the WEM](images/tutorials/dataobjects/surfaces/DO12_11.png "Dragging the WEM") +![Network for dragging one WEM](images/tutorials/dataobjects/surfaces/DO12_11.png "Network for dragging one WEM") At last, exchange the `WEMInitialize` module showing the cube with `WEMLoad` and load *venus.off*. You can decrease the Face Alpha in the panel of `SoWEMRenderer1` to make that WEM transparent. 
diff --git a/mevislab.github.io/content/tutorials/image_processing.md b/mevislab.github.io/content/tutorials/image_processing.md index 2ffae0046..83a2b607f 100644 --- a/mevislab.github.io/content/tutorials/image_processing.md +++ b/mevislab.github.io/content/tutorials/image_processing.md @@ -23,6 +23,4 @@ MeVisLab provides multiple modules for image processing tasks, such as: * Arithmetics * Statistics -For details about Image Processing in MeVisLab, see the {{< docuLinks "/Resources/Documentation/Publish/SDK/GettingStarted/ch06.html#FOImageProcessing" "MeVisLab Documentation">}} - In this chapter, you will find some examples for different types of image processing in MeVisLab. diff --git a/mevislab.github.io/content/tutorials/image_processing/cpp_1.md b/mevislab.github.io/content/tutorials/image_processing/cpp_1.md index b6207e621..d46aba3a0 100644 --- a/mevislab.github.io/content/tutorials/image_processing/cpp_1.md +++ b/mevislab.github.io/content/tutorials/image_processing/cpp_1.md @@ -16,7 +16,7 @@ menu: # Example 1: Creating a New ML Module for Adding a Value to Each Voxel ## Precondition -Make sure to have [cmake](https://cmake.org/download) installed. This example has been created using CMake Legacy Release (3.31.11). +Make sure to have [CMake](https://cmake.org/download) installed. This example has been created using CMake Legacy Release (3.31.11). ## Introduction In this example, we develop our own C++ ML module, which adds a constant value to each voxel of the given input image. @@ -25,31 +25,31 @@ In this example, we develop our own C++ ML module, which adds a constant value t ### Create a New ML Module Before creating the module, make sure to have your own user package available. See [Package creation](tutorials/basicmechanisms/macromodules/package/) for details about Packages. -Use the *Project Wizard* via the menu entry {{< menuitem "File" "Run Project Wizard ..." >}} to create a new ML module. Select *ML Module* and click *Run Wizard*. 
+Use the *Project Wizard* via the menu entry {{< menuitem "File" "Run Project Wizard ..." >}} to create a new ML module. Select *ML Module* and click Run Wizard. -![ML Module Project Wizard](images/tutorials/image_processing/cpp/cpp1_1.png "ML Module Project Wizard") +![ML module Project Wizard](images/tutorials/image_processing/cpp/cpp1_1.png "ML module Project Wizard") Enter properties of your new module and give your module the name `SimpleAdd`. Make sure to select your user package and name your project *SimpleAdd*. -![ML Module Properties](images/tutorials/image_processing/cpp/cpp1_2.png "ML Module Properties") +![ML module properties](images/tutorials/image_processing/cpp/cpp1_2.png "ML module properties") - Click *Next*. The next screen of the Wizard allows you to define the inputs and outputs of your module. Select *Module Type* as *New style ML Module*, make sure to have one input and one output and leave the rest of the settings unchanged. + Click Next >. The next screen of the Wizard allows you to define the inputs and outputs of your module. Select Module Type as *New style ML Module*, make sure to have one input and one output and leave the rest of the settings unchanged. -![ML Module Properties](images/tutorials/image_processing/cpp/cpp1_3.png "ML Module Properties") +![ML module properties](images/tutorials/image_processing/cpp/cpp1_3.png "ML module properties") -Click *Next*. On the next screen, we can define some additional properties of our module. Select *Add activateAttachments()*, unselect *Add configuration hints* and select *Add MDL window with fields*. +Click Next >. On the next screen, we can define some additional properties of our module. Select Add activateAttachments(), unselect Add configuration hints, and select Add MDL window with fields. 
-![ML Module Additional Properties](images/tutorials/image_processing/cpp/cpp1_4.png "ML Module Additional Properties") +![ML module additional properties](images/tutorials/image_processing/cpp/cpp1_4.png "ML module additional properties") -Click *Next*. The Module Field Interface allows you to define additional fields for the module. More fields can be added later but this is the easiest way to add fields. Click *New* to create a new field, then enter the following: +Click Next >. The *Module Field Interface* allows you to define additional fields for the module. More fields can be added later, but this is the easiest way to add fields. Click New to create a new field, then enter the following: * **Field Name:** constantValue * **Field Type:** Double * **Field Comment:** This constant value is added to each voxel. * **Field Value:** 0. -![ML Module Field Interface](images/tutorials/image_processing/cpp/cpp1_5.png "ML Module Field Interface") +![Module field interface](images/tutorials/image_processing/cpp/cpp1_5.png "Module field interface") -Click *Create*. You see a screen showing the results of the module creation process. In the case the Wizard finished succesfully, you can close the window. Additionally, an explorer window opens showing the created folder containing your sources and the *CMakeLists.txt*. +Click Create. You see a screen showing the results of the module creation process. In case the Wizard finished successfully, you can close the window. Additionally, an explorer window opens showing the created folder containing your sources and the *CMakeLists.txt*. The foundation of the module has been created with the Wizard. From here on, the programming starts.
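Before diving into the generated sources, it helps to see the operation the module will perform. Stripped of the MeVisLab `TSubImage` machinery, adding the *constantValue* field to every voxel of one page is just a loop (standalone sketch; `addConstant` is an invented name, not the wizard-generated signature):

```cpp
#include <cstddef>
#include <vector>

// Standalone sketch of what SimpleAdd will do per page: add a constant
// to every voxel. In the real module, this loop lives inside
// typedCalculateOutputSubImage() and runs on paged image buffers; here,
// a plain vector stands in for one page of image data.
template <typename T>
void addConstant(const std::vector<T>& inputPage,
                 std::vector<T>& outputPage,
                 double constantValue) {
    outputPage.resize(inputPage.size());
    for (std::size_t i = 0; i < inputPage.size(); ++i) {
        // The wizard-generated inner line copies the voxel; the tutorial
        // changes it to add the field value:
        outputPage[i] = static_cast<T>(inputPage[i] + constantValue);
    }
}
```

The cast back to `T` mirrors the fact that the ML processes pages in the voxel type of the image, whatever type that is.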
@@ -60,7 +60,7 @@ The Project Wizard creates a *CMakeLists.txt* file that describes the typical pr Just make sure that the MLAB_ROOT environment variable is set on your system and points to the packages directory of your MeVisLab installation, because this is used to resolve the reference to the 'MeVisLab' project. -Open a command line and change to your current module directory (the directory containing your *CMakeLists.txt* file). Enter **cmake . -G "Visual Studio 17"**. After execution, a lot of files are generated by CMake. +Open a command line and change to your current module directory (the directory containing your *CMakeLists.txt* file). Enter `cmake . -G "Visual Studio 17"`. After execution, a lot of files are generated by CMake. For further documentation about our use of CMake, see: [CMake for MeVisLab - Documentation](https://mevislabdownloads.mevis.de/docs/current/MeVisLab/Resources/Documentation/Publish/SDK/CMakeManual/#mainBook). @@ -111,7 +111,7 @@ void SimpleAdd::calculateOutputImageProperties(int /*outputIndex*/, PagedImage* {{}} {{}} -*outputIndex* is the index number of the output connector. It is commented out in this example, because we only defined one output. In the case of more than one outputs, uncomment this parameter. +*outputIndex* is the index number of the output connector. It is commented out in this example because we only defined one output. In case of more than one output, uncomment this parameter. {{}} #### Implementing *typedCalculateOutputSubImage* @@ -139,7 +139,7 @@ Then, change the inner line of the loop so that the constant value is added to t Compile the project in the development environment. Make sure to select a *Release* build. ### Use Your Module in MeVisLab -Your compiled **.dll* is available in your project directory under *Sources/lib*. In order to use it in MeVisLab, it needs to be copied to the *lib* folder of your user package. +Your compiled *.dll* is available in your project directory under *Sources/lib*.
In order to use it in MeVisLab, it needs to be copied to the *lib* folder of your user package. This only works in a post-build step. @@ -147,7 +147,7 @@ If the environment variable *MLAB_AUTOMATIC_POSTBUILD_COPY* is set, the newly co For testing purposes, you can use a `LocalImage` module and two `View2D` modules. Connect the `SimpleAdd` module to the second `View2D` and change the Constant Value field. -![Testing Network](images/tutorials/image_processing/cpp/cpp1_6.png "Testing Network") +![Testing network](images/tutorials/image_processing/cpp/cpp1_6.png "Testing network") The output image of the module `SimpleAdd` is automatically recalculated on changing the field Constant Value. This is already implemented in the generated code of the file below: diff --git a/mevislab.github.io/content/tutorials/image_processing/cpp_development.md b/mevislab.github.io/content/tutorials/image_processing/cpp_development.md index e1e5641e8..37a47ebbb 100644 --- a/mevislab.github.io/content/tutorials/image_processing/cpp_development.md +++ b/mevislab.github.io/content/tutorials/image_processing/cpp_development.md @@ -15,44 +15,44 @@ menu: # C++ Module Development ## Introduction -The development of your own C++ modules can be done by ML modules and by Open Inventor modules. +The development of your own C++ modules can be done by implementing ML modules or Open Inventor modules. {{}} Make sure to use a compiler that is compatible with your currently installed MeVisLab version. {{}} ### ML Modules on the C++ Level -* Image processing modules are objects derived from class Module defined in the ML library and therefore are also called ML modules. -* Image inputs and outputs are connectors to objects of class PagedImage, which are defined in the ML library. -* Inputs and outputs for abstract data structures are connectors to pointers of objects derived from class Base and are called Base objects. 
+* Image processing modules are objects derived from class {{< docuLinks "/Resources/Documentation/Publish/SDK/MLReference/classml_1_1Module.html" "Module">}} defined in the ML library and therefore are also called ML modules. +* Image inputs and outputs are connectors to objects of class {{< docuLinks "/Resources/Documentation/Publish/SDK/MLReference/classml_1_1PagedImage.html" "PagedImage">}}, which are defined in the ML library. +* Inputs and outputs for abstract data structures are connectors to pointers of objects derived from class {{< docuLinks "/Resources/Documentation/Publish/SDK/MLReference/classml_1_1Base.html" "Base">}} and are called Base objects. ### Open Inventor Modules on the C++ Level -* Most Open Inventor modules are objects derived from class SoNode, defined in the Open Inventor library. +* Most Open Inventor modules are objects derived from class {{< docuLinks "/../MeVis/ThirdParty/Documentation/Publish/OpenInventorReference/classSoNode.html" "SoNode">}}, defined in the Open Inventor library. * Open Inventor inputs and outputs are connectors to objects derived from class SoNode, defined in the Open Inventor library. Many Open Inventor modules will return themselves as outputs (“self”). On inputs, they may have connectors to child Open Inventor modules. -* Some Open Inventor modules are objects derived from class SoEngine. They are used for calculations and return their output not via output connectors but via parameter fields. -* Open Inventor modules may also have input and output connectors to Base objects and Image objects. +* Some Open Inventor modules are objects derived from class {{< docuLinks "/../MeVis/ThirdParty/Documentation/Publish/OpenInventorReference/classSoEngine.html" "SoEngine">}}. They are used for calculations and return their output not via output connectors but via parameter fields. +* Open Inventor modules may also have input and output connectors to Base objects and image objects. 
* All standard Open Inventor nodes defined in the Open Inventor library are available in MeVisLab as Open Inventor modules. This chapter describes some examples for developing your own ML and Open Inventor modules. ## Some Tips for Module Design ### Macro Modules or C++ Modules? -In [Example 2: Macro Modules](tutorials/basicmechanisms/macromodules/), we already described Macro Modules and how to create them yourself. +In [Example 2: Macro Modules](tutorials/basicmechanisms/macromodules/), we already described macro modules and how to create them yourself. **Advantages of macros:** -* Macros are useful for creating a layer of abstraction by hierarchical grouping of existing modules. -* Scripts can be edited on the fly: +* Macro modules are useful for creating a layer of abstraction by hierarchical grouping of existing modules. +* Scripts can be edited on-the-fly: * no compilation and reload of the module database necessary * scripting possible on the module or network level * scripting supported by the Scripting Assistant View (basically a recorder for actions performed on the network) **Conclusion:** -* For rapid prototyping based on existing image processing algorithms, use macros. +* For rapid prototyping based on existing image processing algorithms, use macro modules. * For implementing new image processing, write new ML or Open Inventor modules. ### Combining Functionalities It is possible to have ML and Open Inventor connectors in the same module. Two cases are possible: -* Type 1: **ML -> visualization:** Image data or properties are displayed by a visualization module. Usually a SoSFXVImage field gets random access to an ML image by *getTile()*. Examples: `SoView2D`, `GlobalStatistics`. +* Type 1: **ML -> visualization:** Image data or properties are displayed by a visualization module. Usually a SoSFXVImage field gets random access to an ML image by getTile(). Examples: `SoView2D`, `GlobalStatistics`. 
* Type 2: **visualization -> ML:** Modules generate an ML image from an Open Inventor scene. Examples: `VoxelizeInventorScene`, `SoExaminerViewer` (hidden functionality). Generally, however, it is not always a good solution to combine that, as the processes of image processing and image visualization are usually separated. diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing1.md b/mevislab.github.io/content/tutorials/image_processing/image_processing1.md index 3b118a682..f51c0c622 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing1.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing1.md @@ -26,7 +26,7 @@ Add two `LocalImage` modules to your workspace for the input images. Select *$(D In the end, add the `Arithmetic2` module and connect them as seen below. -![Example Network](images/tutorials/image_processing/network_example1.png "Example Network") +![Example network](images/tutorials/image_processing/network_example1.png "Example network") Your `SynchroView2D` shows two images. On the left hand side, you can see the original image from your left `LocalImage` module. The right image shows the result of the arithmetic operation performed by the `Arithmetic2` module on the two input images. diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing2.md b/mevislab.github.io/content/tutorials/image_processing/image_processing2.md index 9065d6a0d..0e176d1f2 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing2.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing2.md @@ -29,27 +29,27 @@ Image masking is a very good way to select a defined region where image processi ### Develop Your Network Add a `LocalImage` and a `SynchroView2D` module to your network and connect the modules as seen below. 
-![Example Network](images/tutorials/image_processing/network_example2a.png "Example Network") +![Example network](images/tutorials/image_processing/network_example2a.png "Example network") Open the Automatic Panel of the `SynchroView2D` module via context menu {{< mousebutton "right" >}} and selecting {{< menuitem "Show Window" "Automatic Panel" >}}. Set the field synchLUTs to *Yes*. ![Synchronize LUTs in SynchroView2D](images/tutorials/image_processing/synchLUTs.png "Synchronize LUTs in SynchroView2D") -Double-click the `SynchroView2D` and change window/level values via right mouse button {{< mousebutton "right" >}}. You can see that the background of your images gets very bright and changes based on the LUT are applied to all voxels of your input image - even on the background. Hovering your mouse over the image(s) shows the current gray value under your cursor in [Hounsfield Unit (HU)](https://en.wikipedia.org/wiki/Hounsfield_scale). +Double-click {{< mousebutton "left" >}} the `SynchroView2D` and change window/level values via right mouse button {{< mousebutton "right" >}}. You can see that the background of your images gets very bright and changes based on the LUT are applied to all voxels of your input image - even on the background. Hovering your mouse over the image(s) shows the current gray value under your cursor in [Hounsfield Unit (HU)](https://en.wikipedia.org/wiki/Hounsfield_scale). ![Without masking the image](images/tutorials/image_processing/SynchroView2D_before.png "Without masking the image") -Hovering the mouse over black background voxels shows a value between 0 and about 60. This means we want to create a mask that only allows modifications on voxels having a gray value larger than 60. +Hovering the mouse over black background voxels shows a value between *0* and about *60*. This means we want to create a mask that only allows modifications on voxels having a gray value larger than *60*. 
Add a `Mask` and a `Threshold` module to your workspace and connect them as seen below. -![Example Network](images/tutorials/image_processing/network_example2b.png "Example Network") +![Example network: using Mask](images/tutorials/image_processing/network_example2b.png "Example network: using Mask") -Changing the window/level values in your viewer still also changes background voxels. The `Threshold` module still leaves the voxels as is because the threshold value is configured as larger than 0. Open the panels of the modules `Threshold` and `Mask` via double-click {{< mousebutton "left" >}} and set the values as seen below. +Changing the window/level values in your viewer still also changes background voxels. The `Threshold` module still leaves the voxels as is because the threshold value is configured as larger than *0*. Open the panels of the modules `Threshold` and `Mask` via double-click {{< mousebutton "left" >}} and set the values as seen below. {{< imagegallery 2 "images/tutorials/image_processing" "Threshold" "Mask">}} -Now, all voxels having a value lower or equal 60 are set to 0, all others are set to 1. The resulting image from the `Threshold` module is a binary image that can now be used as a mask by the `Mask` module. +Now, all voxels having a value lower than or equal to *60* are set to *0*, all others are set to *1*. The resulting image from the `Threshold` module is a binary image that can now be used as a mask by the `Mask` module.
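The two modules apply a simple per-voxel rule. A standalone sketch of the combination (plain C++ for illustration; `binaryThreshold` and `applyMask` are invented names, not the module implementations):

```cpp
#include <cstddef>
#include <vector>

// Threshold step: voxels with a value <= threshold (here 60) become 0
// in the binary mask, all others become 1.
std::vector<int> binaryThreshold(const std::vector<int>& image, int threshold) {
    std::vector<int> mask(image.size());
    for (std::size_t i = 0; i < image.size(); ++i)
        mask[i] = (image[i] > threshold) ? 1 : 0;
    return mask;
}

// Mask step: where the mask is 1, the processed voxel passes through;
// where it is 0, the original (background) voxel is kept unchanged.
std::vector<int> applyMask(const std::vector<int>& processed,
                           const std::vector<int>& original,
                           const std::vector<int>& mask) {
    std::vector<int> out(processed.size());
    for (std::size_t i = 0; i < processed.size(); ++i)
        out[i] = mask[i] ? processed[i] : original[i];
    return out;
}
```

Applied to the network above, window/level changes then only affect voxels inside the mask, so the dark background stays dark.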
![Output of the Threshold module](images/tutorials/image_processing/OutputInspector_Threshold.png "Output of the Threshold module") diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing3.md b/mevislab.github.io/content/tutorials/image_processing/image_processing3.md index 25e75d336..7f686b9bb 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing3.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing3.md @@ -27,30 +27,30 @@ In this example, you will segment the brain of an image and show the segmentatio ### Develop Your Network Add a `LocalImage` module to your workspace and select load *$(DemoDataPath)/BrainMultiModal/ProbandT1.dcm*. Add a `View2D` module and connect both as seen below. -![Example Network](images/tutorials/image_processing/network_example3.png "Example Network") +![Example network](images/tutorials/image_processing/network_example3.png "Example network") ### Add the RegionGrowing Module -Add the `RegionGrowing` module and connect the input with the `LocalImage` module. You will see a message *results invalid*. The reason is that a region growing always needs a starting point for getting similar voxels. The output of the module does not show a result in *Output Inspector*. +Add the `RegionGrowing` module and connect the input with the `LocalImage` module. You will see a message *results invalid*. The reason is that a region growing always needs at least one starting point for getting similar voxels. The output of the module does not show a result in the Output Inspector. -![Results Invalid](images/tutorials/image_processing/network_example3a.png "Results Invalid") +![Results invalid](images/tutorials/image_processing/network_example3a.png "Results invalid") Add a `SoView2DMarkerEditor` to your network and connect it with your `RegionGrowing` and with the `View2D`. Clicking into your viewer now creates markers that can be used for the region growing. 
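Conceptually, region growing expands outward from the seed markers to neighboring voxels whose gray values fall inside a threshold interval, which is why at least one seed is required. A minimal standalone sketch on one 2D slice with a 4-neighborhood (plain C++ for illustration; `regionGrow` is an invented name, not the `RegionGrowing` implementation):

```cpp
#include <queue>
#include <utility>
#include <vector>

// Seeded region growing on a 2D slice. Starting from the seed voxel,
// 4-connected neighbors are added to the segmentation mask while their
// gray value stays inside [lower, upper]. Without a seed, the region is
// empty, matching the "results invalid" state of the module.
std::vector<std::vector<int>> regionGrow(const std::vector<std::vector<int>>& img,
                                         int seedY, int seedX,
                                         int lower, int upper) {
    const int h = static_cast<int>(img.size());
    const int w = static_cast<int>(img[0].size());
    std::vector<std::vector<int>> mask(h, std::vector<int>(w, 0));
    std::queue<std::pair<int, int>> todo;
    if (img[seedY][seedX] >= lower && img[seedY][seedX] <= upper) {
        mask[seedY][seedX] = 1;
        todo.push({seedY, seedX});
    }
    const int dy[] = {-1, 1, 0, 0}, dx[] = {0, 0, -1, 1};
    while (!todo.empty()) {
        auto [y, x] = todo.front();
        todo.pop();
        for (int d = 0; d < 4; ++d) {
            const int ny = y + dy[d], nx = x + dx[d];
            if (ny >= 0 && ny < h && nx >= 0 && nx < w && !mask[ny][nx] &&
                img[ny][nx] >= lower && img[ny][nx] <= upper) {
                mask[ny][nx] = 1;
                todo.push({ny, nx});
            }
        }
    }
    return mask;
}
```

A 3D-6-neighborhood, as recommended below for the module, would simply add two more offsets in the z-direction.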
![SoView2DMarkerEditor](images/tutorials/image_processing/SoView2DMarkerEditor.png "SoView2DMarkerEditor") -The region growing starts on manually clicking *Update* or automatically if *Update Mode* is set to *Auto-Update*. We recommend to set update mode to automatic update. Additionally, you should set the *Neighborhood Relation* to *3D-6-Neighborhood (x,y,z)*, because then your segmentation will also be performed in the z-direction. +The region growing starts on manually clicking Update {{< mousebutton "left" >}} or automatically if Update Mode is set to *Auto-Update*. We recommend setting the Update Mode to *Auto-Update*. Additionally, you should set the Neighborhood Relation to *3D-6-Neighborhood (x,y,z)*, because then your segmentation will also be performed in the z-direction. -Set *Threshold Computation* to *Automatic* and define *Interval Size* as 1.600 % for relative, automatic threshold generation. +Set Threshold Computation to *Automatic* and define Interval Size as *1.600 %* for relative, automatic threshold generation. {{}} For more information, see {{< docuLinks "/Standard/Documentation/Publish/ModuleReference/RegionGrowing.html" "MeVisLab Module Reference" >}} {{}} -![Auto-Update for RegionGrowing](images/tutorials/image_processing/RegionGrowing_AutoUpdate.png "Auto-Update for RegionGrowing") +![Auto-update for RegionGrowing](images/tutorials/image_processing/RegionGrowing_AutoUpdate.png "Auto-update for RegionGrowing") -Clicking into your image in the `View2D` now already generates a mask containing your segmentation. As you did not connect the output of the `RegionGrowing`, you need to select the output of the module and use the *Output Inspector* to visualize your results. +Clicking into your image in the `View2D` now already generates a mask containing your segmentation.
-![Output Inspector Preview](images/tutorials/image_processing/OutputInspector.png "Output Inspector Preview") +![Output Inspector preview](images/tutorials/image_processing/OutputInspector.png "Output Inspector preview") In order to visualize your segmentation mask as an overlay in the `View2D`, you need to add the `SoView2DOverlay` module. Connect it as seen below. @@ -59,7 +59,7 @@ In order to visualize your segmentation mask as an overlay in the `View2D`, you Your segmentation is now shown in the `View2D`. You can change the color and transparency of the overlay via SoView2DOverlay. ### Close Gaps -Scrolling through the slices, you will see that your segmentation is not closed. There are lots of gaps where the gray value of your image differs more than your threshold. You can simply add a `CloseGap` module to resolve this issue. Configure *Filter Mode* as *Binary Dilatation*, *Border Handling* as *Pad Src Fill* and set *KernelZ* to 3. +Scrolling through the slices, you will see that your segmentation is not closed. There are lots of gaps where the gray value of your image differs more than your threshold. You can simply add a `CloseGap` module to resolve this issue. Configure Filter Mode as *Binary Dilatation*, Border Handling as *Pad Src Fill*, and set KernelZ to *3*. The difference before and after closing the gaps can be seen in the Output Inspector. @@ -70,7 +70,7 @@ You can play around with the different settings of the `RegionGrowing` and `Clos ### Visualize 2D and 3D You can now also add a `View3D` to show your segmentation in 3D. Your final result should look similar to this. -![Final Result](images/tutorials/image_processing/network_example3c.png "Final Result") +![Final result](images/tutorials/image_processing/network_example3c.png "Final result") ## Summary * The module `RegionGrowing` allows a very simple segmentation of similar gray values. 
diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing4.md b/mevislab.github.io/content/tutorials/image_processing/image_processing4.md index 3535fd6d7..c289dc225 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing4.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing4.md @@ -23,32 +23,32 @@ In this example, we load an image and render it as `WEMIsoSurface`. Then, we cre ## Steps to Do ### Develop Your Network -Add a `LocalImage` module to your workspace and select load *$(DemoDataPath)/BrainMultiModal/ProbandT1.dcm*. Add a `WEMIsoSurface`, a `SoWEMRenderer`, a `SoBackground`, and a `SoExaminerViewer` module and connect them as seen below. Make sure to configure the `WEMIsoSurface` to use a *Iso Min. Value* of 420 and a *Voxel Sampling* 1. +Add a `LocalImage` module to your workspace and load *$(DemoDataPath)/BrainMultiModal/ProbandT1.dcm*. Add a `WEMIsoSurface`, a `SoWEMRenderer`, a `SoBackground`, and a `SoExaminerViewer` module and connect them as seen below. Make sure to configure the `WEMIsoSurface` to use an Iso Min. Value of *420* and a Voxel Sampling of *1*. -![Example Network](images/tutorials/image_processing/network_example4.png "Example Network") +![Example network](images/tutorials/image_processing/network_example4.png "Example network") The `SoExaminerViewer` now shows the head as a three-dimensional rendering. -![SoExaminerViewer](images/tutorials/image_processing/SoExaminerViewer_initial.png "SoExaminerViewer") +![SoExaminerViewer showing a head in 3D](images/tutorials/image_processing/SoExaminerViewer_initial.png "SoExaminerViewer showing a head in 3D") ### Add a 3D Sphere to Your Scene We now want to add a three-dimensional sphere to our scene. Add a `SoMaterial` and a `SoSphere` to your network, connect them to a `SoSeparator` and then to the `SoExaminerViewer`.
Set your material to use a red *Diffuse Color* and adapt the size of the sphere to *Radius* 50. -![Example Network](images/tutorials/image_processing/network_example4b.png "Example Network") +![Example network with a sphere](images/tutorials/image_processing/network_example4b.png "Example network with a sphere") The `SoExaminerViewer` now shows the head and the red sphere inside. -![SoExaminerViewer](images/tutorials/image_processing/SoExaminerViewer_sphere.png "SoExaminerViewer") +![SoExaminerViewer shows the head and the sphere in 3D](images/tutorials/image_processing/SoExaminerViewer_sphere.png "SoExaminerViewer shows the head and the sphere in 3D") ### Set Location of Your Sphere In order to define the best possible location of the sphere, we additionally add a `SoTranslation` module and connect it to the `SoSeparator` between the material and the sphere. Define a translation of x=0, y=20 and z=80. -![Example Network](images/tutorials/image_processing/network_example4c.png "Example Network") +![Translated sphere](images/tutorials/image_processing/network_example4c.png "Translated sphere") -### Subtract the Sphere From the Head +### Subtract the Sphere from the Head We now want to subtract the sphere from the head to get a hole. Add another `SoWEMRenderer`, a `WEMLevelSetBoolean`, and a `SoWEMConvertInventor` to the network and connect them to a `SoSwitch` as seen below. The `SoSwitch` also needs to be connected to the `SoWEMRenderer` of the head. Set your `WEMLevelSetBoolean` to use the *Mode* **Difference**. -![Example Network](images/tutorials/image_processing/network_example4d.png "Example Network") +![Network for subtracting a sphere from a head's surface](images/tutorials/image_processing/network_example4d.png "Network for subtracting a sphere from a head's surface") What happens in your network now?
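Conceptually, the *Difference* mode removes everything of the head surface that lies inside the sphere. On binary voxel masks the operation reduces to a plain set difference (a deliberate simplification; `WEMLevelSetBoolean` actually operates on level sets of surface meshes):

```python
# Toy voxel masks: a 5x5x5 "head" block and a small sphere around its center.
head = {(x, y, z) for x in range(5) for y in range(5) for z in range(5)}
sphere = {(x, y, z) for (x, y, z) in head
          if (x - 2) ** 2 + (y - 2) ** 2 + (z - 2) ** 2 <= 1}

# Difference mode: keep the head, carve out the sphere.
hole = head - sphere
```

In the network, the `SoSwitch` lets you toggle between the unmodified head surface (`head`) and the carved result (`hole`).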
@@ -66,4 +66,4 @@ You can now toggle the hole to be shown or not, depending on your setting for th * The module `WEMLevelSetBoolean` allows to subtract or add three-dimensional WEM objects. * The `SoSwitch` can toggle multiple Open Inventor scenes as input. -{{< networkfile "examples/image_processing/example4/Subtract3DObjects.mlab" >}} \ No newline at end of file +{{< networkfile "examples/image_processing/example4/Subtract3DObjects.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing5.md b/mevislab.github.io/content/tutorials/image_processing/image_processing5.md index d0495819d..4cb2f5049 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing5.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing5.md @@ -27,9 +27,9 @@ First, we need to develop the network to scroll through the slices. Add a `Local Add the modules `OrthoReformat3`, `Switch`, `SoView2D`, `View2DExtensions`, and `SoRenderArea` and connect them as seen below. -![Example Network](images/tutorials/image_processing/network_example5.png "Example Network") +![Example network](images/tutorials/image_processing/network_example5.png "Example network") -In previous tutorials, we already learned that it is possible to show 2D slices in a `SoRenderArea`. For scrolling through the slices, a `View3DExtensions` module is necessary. In this network, we also have a `OrthoReformat3` module. It allows us to transform the input image (by rotating and/or flipping) into the three main views commonly used: +In previous tutorials, we have already seen that it is possible to show 2D slices in a `SoRenderArea`. For scrolling through the slices, a `View3DExtensions` module is necessary. In this network, we also have a `OrthoReformat3` module. 
It allows us to transform the input image (by rotating and/or flipping) into the three main views commonly used: * Axial * Coronal * Sagittal @@ -43,7 +43,7 @@ The `SoRenderArea` now shows the 2D images in a view defined by the `Switch`. ### Current 2D Slice in 3D We now want to visualize the slice visible in the 2D images as a 3D plane. Add a `SoGVRDrawOnPlane` and a `SoExaminerViewer` to your workspace and connect them. We should also add a `SoBackground` and a `SoLUTEditor`. The viewer remains empty because no source image is selected to display. Add a `SoGVRVolumeRenderer` and connect it to your viewer and the `LocalImage`. -![Example Network](images/tutorials/image_processing/network_example5b.png "Example Network") +![Network with an additional 3D renderer](images/tutorials/image_processing/network_example5b.png "Network with an additional 3D renderer") A three-dimensional plane of the image is shown. Adapt the LUT as seen below. @@ -51,9 +51,9 @@ A three-dimensional plane of the image is shown. Adapt the LUT as seen below. We now have a single slice of the image in 3D, but the slice is static and cannot be changed. In order to use the currently visible slice from the 2D viewer, we need to create a parameter connection from the `SoView2D` position *Slice as plane* to the `SoGVRDrawOnPlane` plane vector. -![SoView2D Position](images/tutorials/image_processing/SoView2D_Position.png "SoView2D Position") +![SoView2D position](images/tutorials/image_processing/SoView2D_Position.png "SoView2D position") -![SoGVRDrawOnPlane Plane](images/tutorials/image_processing/SoGVRDrawOnPlane_Plane.png "SoGVRDrawOnPlane Plane") +![SoGVRDrawOnPlane plane](images/tutorials/image_processing/SoGVRDrawOnPlane_Plane.png "SoGVRDrawOnPlane plane") Now, the plane representation of the visible slice is synchronized to the plane of the 3D view. Scrolling through your 2D slices changes the plane in 3D. 
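The behavior of such a parameter connection can be modeled with a tiny sketch (a toy model for illustration only, not MeVisLab's scripting API, and the plane values are made up): whenever the source field changes, the new value is pushed to every connected field.

```python
class Field:
    """Toy stand-in for a module field supporting parameter connections."""

    def __init__(self, value=None):
        self.value = value
        self._targets = []

    def connect_from(self, source):
        # Mirror the source value now and on every future change.
        source._targets.append(self)
        self.set(source.value)

    def set(self, value):
        self.value = value
        for target in self._targets:
            target.set(value)

# SoView2D publishes the visible slice as a plane; the 3D consumer follows it.
so_view2d_plane = Field((0.0, 0.0, 1.0, 40.0))
gvr_plane = Field()
gvr_plane.connect_from(so_view2d_plane)

so_view2d_plane.set((0.0, 0.0, 1.0, 41.0))  # user scrolls one slice
```

After the last line, `gvr_plane.value` equals the new plane, which is exactly why scrolling in 2D moves the plane in 3D; the `SoClipPlane` used later in this tutorial follows the same mechanism.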
@@ -62,7 +62,7 @@ Now, the plane representation of the visible slice is synchronized to the plane ### Current 2D Slice as Clip Plane in 3D This slice shall now be used as a clip plane in 3D. In order to achieve this, you need another `SoExaminerViewer` and a `SoClipPlane`. Add them to your workspace and connect them as seen below. You can also use the same `SoLUTEditor` and `SoBackground` for the 3D view. Also use the same `SoGVRVolumeRenderer`; the 3D volume does not change. -![Example Network](images/tutorials/image_processing/network_example5c.png "Example Network") +![Example network](images/tutorials/image_processing/network_example5c.png "Example network") Now, your 3D scene shows a three-dimensional volume cut by a plane in the middle. Once again, the clipping is not the same slice as your 2D view shows. @@ -70,11 +70,11 @@ Now, your 3D scene shows a three-dimensional volume cut by a plane in the middle Again, create a parameter connection from the `SoView2D` position *Slice as plane*, but this time to the `SoClipPlane`. -![SoClipPlane Plane](images/tutorials/image_processing/SoClipPlane_Plane.png "SoClipPlane Plane") +![SoClipPlane plane](images/tutorials/image_processing/SoClipPlane_Plane.png "SoClipPlane plane") If you now open all three viewers and scroll through the slices in 2D, the 3D viewers are both synchronized with the current slice. You can even toggle the view in the `Switch` and the plane is adapted automatically. -![Final 3 views](images/tutorials/image_processing/Final3Views.png "Final 3 views") +![Final three views](images/tutorials/image_processing/Final3Views.png "Final three views") ## Summary * The module `OrthoReformat3` transforms input images to the three viewing directions: coronal, axial, and sagittal. @@ -82,4 +82,4 @@ If you now open all three viewers and scroll through the slices in 2D, the 3D vi * The `SoGVRDrawOnPlane` module renders a single slice as a three-dimensional plane. 
* Three-dimensional clip planes on volumes can be created by using a `SoClipPlane` module. -{{< networkfile "examples/image_processing/example5/ImageProcessingExample5.mlab" >}} \ No newline at end of file +{{< networkfile "examples/image_processing/example5/ImageProcessingExample5.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing6.md b/mevislab.github.io/content/tutorials/image_processing/image_processing6.md index e15ad7dd8..88d286f30 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing6.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing6.md @@ -25,13 +25,13 @@ This tutorial explains how to load and visualize DICOM RT (Radiotherapy) data in *DICOM RT* files are essential in radiotherapy treatment planning. They include: -* **RT Structure Set**, containing information related to patient anatomy, for example, structures, markers, and isocenters. These entities are typically identified on devices such as CT scanners, physical or virtual simulation workstations, or treatment planning systems. -* **RT Plan**, containing geometric and dosimetric data specifying a course of external beam and/or brachytherapy treatment, for example, beam angles, collimator openings, beam modifiers, and brachytherapy channel and source specifications. The RT Plan entity may be created by a simulation workstation, and subsequently enriched by a treatment planning system before being passed on to a record and verify system or treatment device. An instance of the RT Plan object usually references an RT Structure Set instance to define a coordinate system and set of patient structures. -* **RT Dose**, containing dose data generated by a treatment planning system in one or more of several formats: three-dimensional dose data, isodose curves, DVHs, or dose points. +* **RT Structure Set**: containing information related to patient anatomy, for example, structures, markers, and isocenters. 
These entities are typically identified on devices such as CT scanners, physical or virtual simulation workstations, or treatment planning systems. +* **RT Plan**: containing geometric and dosimetric data specifying a course of external beam and/or brachytherapy treatment, for example, beam angles, collimator openings, beam modifiers, and brachytherapy channel and source specifications. The **RT Plan** entity may be created by a simulation workstation, and subsequently enriched by a treatment planning system before being passed on to a record and verify system or treatment device. An instance of the **RT Plan** object usually references an **RT Structure Set** instance to define a coordinate system and set of patient structures. +* **RT Dose**: containing dose data generated by a treatment planning system in one or more of several formats: three-dimensional dose data, isodose curves, DVHs, or dose points. Additional objects not used in this tutorial are: -* **RT Image**, specifying radiotherapy images that have been obtained on a conical imaging geometry, such as those found on conventional simulators and portal imaging devices. It can also be used for calculated images using the same geometry, such as digitally reconstructed radiographs (DRRs). -* **RT Beams Treatment Record**, **RT Brachy Treatment Record**, and **RT Treatment Summary Record**, containing data obtained from actual radiotherapy treatments. These objects are the historical record of the treatment, and are linked with the other „planning” objects to form a complete picture of the treatment. +* **RT Image**: specifying radiotherapy images that have been obtained on a conical imaging geometry, such as those found on conventional simulators and portal imaging devices. It can also be used for calculated images using the same geometry, such as digitally reconstructed radiographs (DRRs). 
+* **RT Beams Treatment Record**, **RT Brachy Treatment Record**, and **RT Treatment Summary Record**: containing data obtained from actual radiotherapy treatments. These objects are the historical record of the treatment, and are linked with the other "planning" objects to form a complete picture of the treatment. ## Precondition If you do not have DICOM RT data, you can download an example dataset at: @@ -46,9 +46,9 @@ Extract the *.zip* file into a new folder named *DICOM_FILES*. ## Prepare Your Network Add the module `DicomImport` to your workspace. -Then, click {{< mousebutton "left" >}} *Browse* and select the new folder named *DICOM_FILES* where you copied the content of the ZIP file earlier. Click *Import* {{< mousebutton "left" >}}. You can see the result after import below: +Then, click {{< mousebutton "left" >}} Browse and select the new folder named *DICOM_FILES* where you copied the content of the ZIP file earlier. Click Import {{< mousebutton "left" >}}. You can see the result after import below: -![DICOM RT Data in DicomImport module](images/tutorials/image_processing/Example6_1.png "DICOM RT Data in DicomImport module") +![DICOM RT data in DicomImport module](images/tutorials/image_processing/Example6_1.png "DICOM RT data in DicomImport module") The dataset contains an anonymized patient with four series: * RTPLAN \ @@ -73,7 +73,7 @@ You have to select the correct index for the *RTSTRUCT*. In our example it is in ### Visualize RTSTRUCTs as Colored CSOs Add an `ExtractRTStruct` module to the `DicomImportExtraOutput` to convert *RTSTRUCT* data into MeVisLab contours (CSOs). CSOs allow to visualize the contours on the CT scan and to interact with them in MeVisLab. -A preview of the resulting CSOs can be seen in the *Output Inspector*. +A preview of the resulting CSOs can be seen in the Output Inspector.
![ExtractRTStruct in Output Inspector](images/tutorials/image_processing/Example6_3.png "ExtractRTStruct in Output Inspector") @@ -85,7 +85,7 @@ We want to display the names for the contours available in the *RTSTRUCT* file t ![CSOLabelRenderer](images/tutorials/image_processing/Example6_5.png " CSOLabelRenderer") -By default, the ID of the contours is rendered. Open the panel of the `CSOLabelRenderer` and change the *labelString* parameter as seen below. +By default, the ID of the contours is rendered. Open the panel of the `CSOLabelRenderer` and change the labelString parameter as seen below. ![CSOLabelRenderer labelString](images/tutorials/image_processing/Example6_6.png "CSOLabelRenderer labelString") @@ -95,11 +95,11 @@ labelString = cso.getGroupAt(0).getLabel() ``` {{}} -Then, press apply {{< mousebutton "left" >}}. The name of the structure is defined in the group of each CSO. We now show the label of the group next to the contour. Add a `CSOLabelPlacementGlobal` module to define a better readable location of these labels. +Then, press Apply {{< mousebutton "left" >}}. The name of the structure is defined in the group of each CSO. We now show the label of the group next to the contour. Add a `CSOLabelPlacementGlobal` module to define a more readable location for these labels. The module `CSOLabelPlacementGlobal` implements an automatic label placement strategy that considers all CSOs on a slice. -![Edited CSOLabelRenderer Panel](images/tutorials/image_processing/Example6_7.png " Edited CSOLabelRenderer Panel") +![Labels are placed on the current slice](images/tutorials/image_processing/Example6_7.png "Labels are placed on the current slice") ### 3D Visualization of Contours Using `SoExaminerViewer` The contours can also be shown in 3D. @@ -123,19 +123,19 @@ Change update mode of the `Histogram` module to *Auto Update*. Open the panel of the `SoLUTEditor` module and go to tab *Range*.
Click {{< mousebutton "left" >}} *Update Range From Histogram* to apply the histogram values for the *Range* of the lookup table. -![Lookup table and Histogram](images/tutorials/image_processing/Example6_10.png "Lookup table and Histogram") +![Histogram for the lookup table](images/tutorials/image_processing/Example6_10.png "Histogram for the lookup table") On tab *Editor*, define a lookup table as seen below. -![Lookup table](images/tutorials/image_processing/Example6_11.png "Lookup table") +![Lookup table with the histogram](images/tutorials/image_processing/Example6_11.png "Lookup table with the histogram") The lookup table shall be used for showing the RT Dose data as a semitransparent overlay on the CT image. Add a `SoView2DOverlay` and a `SoGroup` module to your network. Replace the input of the View2D module from the `SoView2DCSOExtensibleEditor` with the `SoGroup`. -![RT Dose data using SoView2DOverlay](images/tutorials/image_processing/Example6_12.png "RT Dose data using SoView2DOverlay") +![RTDose data using SoView2DOverlay](images/tutorials/image_processing/Example6_12.png "RTDose data using SoView2DOverlay") If you want to visualize the RT Struct contours together with the RT Dose overlay, connect the `SoView2DCSOExtensibleEditor` module and the `SoGroup` module. -![RT Dose and RT Struct](images/tutorials/image_processing/Example6_13.png "RT Dose and RT Struct") +![RTDose and RTStruct](images/tutorials/image_processing/Example6_13.png "RTDose and RTStruct") ## Summary * DICOM RT data can be loaded and processed in MeVisLab. 
diff --git a/mevislab.github.io/content/tutorials/openinventor.md b/mevislab.github.io/content/tutorials/openinventor.md index 0f3ff5b25..90ea4b0f8 100644 --- a/mevislab.github.io/content/tutorials/openinventor.md +++ b/mevislab.github.io/content/tutorials/openinventor.md @@ -26,11 +26,11 @@ The names of Open Inventor modules start with the prefix `So\*` (for Scene Objec An exemplary Open Inventor scene will be implemented in the following paragraph. ## Open Inventor Scenes and Execution of Scene Graphs{#sceneGraphs} -Inventor scenes are organized in structures called scene graphs. A scene graph is made up of nodes, which represent 3D objects to be drawn, properties of the 3D objects, nodes that combine other nodes and are used for hierarchical grouping, and others (cameras, lights, etc.). These nodes are accordingly called shape nodes, property nodes, group nodes, and so on. Each node contains one or more pieces of information stored in fields. For example, the `SoSphere` node contains only its radius, stored in its radius field. Open Inventor modules function as Open Inventor nodes, so they may have input connectors to add Open Inventor child nodes (modules) and output connectors to link themselves to Open Inventor parent nodes (modules). +Open Inventor scenes are organized in structures called scene graphs. A scene graph is made up of nodes, which represent 3D objects to be drawn, properties of the 3D objects, nodes that combine other nodes and are used for hierarchical grouping, and others (cameras, lights, etc.). These nodes are accordingly called shape nodes, property nodes, group nodes, and so on. Each node contains one or more pieces of information stored in fields. For example, the `SoSphere` node contains only its radius, stored in its radius field. 
Open Inventor modules function as Open Inventor nodes, so they may have input connectors to add Open Inventor child nodes (modules) and output connectors to link themselves to Open Inventor parent nodes (modules). {{}} The model below depicts the order in which the modules are traversed. The red arrow indicates the traversal order: from top to bottom and from left to right. The modules are numbered accordingly from 1 to 8. Knowing about the traversal order can be crucial to achieve a certain output. -![Traversing in Open Inventor](images/tutorials/openinventor/OI1_13.png "Traversing through a network of Open Inventor modules") +![Traversing through a network of Open Inventor modules](images/tutorials/openinventor/OI1_13.png "Traversing through a network of Open Inventor modules") {{}} ## SoGroup and SoSeparator @@ -38,7 +38,7 @@ The `SoGroup` and `SoSeparator` modules can be used as containers for child node ![SoGroup vs. SoSeparator](images/tutorials/openinventor/SoGroup_SoSeparator.png "SoGroup vs. SoSeparator") -In the network above, we render four `SoCone` objects. The left side uses the `SoSeparator` modules, the right side uses the `SoGroup` ones. There is a `SoMaterial` module defining one of the left cone objects to be yellow. As you can see, the `SoMaterial` module is only applied to that cone, the other left cone remains in its default gray color, because the `SoSeparator` module isolates the separator's children from the rest of the scene graph. +In the network above, we render four `SoCone` objects. The left side uses the `SoSeparator` modules, the right side uses the `SoGroup` ones. There is a `SoMaterial` module defining one of the left cone objects to be yellow. As you can see, the `SoMaterial` module is only applied to that cone, the other left cone remains in its default gray color, because the `SoSeparator` module isolates its children from the rest of the scene graph.
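The different behavior of `SoSeparator` and `SoGroup` can be sketched with a toy traversal in Python (an illustration of the concept, not the Open Inventor API): a separator saves the traversal state when it is entered and restores it when it is left, while a group leaves the state untouched.

```python
def traverse(node, state):
    """Depth-first, left-to-right traversal with a shared state dict,
    returning the drawn shapes with their effective color."""
    drawn = []
    kind = node[0]
    if kind == "material":            # property node: modifies the state
        state["color"] = node[1]
    elif kind == "cone":              # shape node: drawn with the current state
        drawn.append(("cone", state["color"]))
    elif kind in ("separator", "group"):
        saved = dict(state) if kind == "separator" else None
        for child in node[1]:
            drawn.extend(traverse(child, state))
        if kind == "separator":       # restore the state on exit
            state.clear()
            state.update(saved)
    return drawn

# Left side: the separator isolates the yellow material.
# Right side: the group lets the red material leak to the following cone.
scene = ("group", [
    ("separator", [("material", "yellow"), ("cone",)]),
    ("cone",),
    ("group", [("material", "red"), ("cone",)]),
    ("cone",),
])
drawn = traverse(scene, {"color": "gray"})
# drawn: yellow cone, gray cone, red cone, red cone
```

This reproduces the network's result: the cone next to the separator stays gray, while the cone following the group inherits the red material.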
On the right side, we are using `SoGroup` ({{< docuLinks "/Standard/Documentation/Publish/ModuleReference/SoGroup.html" "SoGroup module reference" >}}). The material of the cone is set to be of red color. As the `SoGroup` module does not alter the traversal state in any way, the second cone in this group is also colored in red. @@ -50,4 +50,3 @@ Details on these can be found in the {{< docuLinks "/Standard/Documentation/Publ {{< networkfile "examples/open_inventor/SoGroupSoSeparator.mlab" >}} More information about Open Inventor and Scene Graphs can be found {{< docuLinks "/Resources/Documentation/Publish/SDK/GettingStarted/ch07.html" "here" >}} , in the {{< docuLinks "/Standard/Documentation/Publish/Overviews/OpenInventorOverview.html" "Open Inventor Overview" >}} or the [Open Inventor Reference](https://mevislabdownloads.mevis.de/docs/current/MeVis/ThirdParty/Documentation/Publish/OpenInventorReference/index.html). - diff --git a/mevislab.github.io/content/tutorials/openinventor/camerainteraction.md b/mevislab.github.io/content/tutorials/openinventor/camerainteraction.md index 29b7f01c0..4f7567706 100644 --- a/mevislab.github.io/content/tutorials/openinventor/camerainteraction.md +++ b/mevislab.github.io/content/tutorials/openinventor/camerainteraction.md @@ -49,28 +49,28 @@ Whenever you change the camera in the switch, you need to detect the new camera A `SoPerspectiveCamera` camera defines a perspective projection from a viewpoint. -The viewing volume for a perspective camera is a truncated pyramid. By default, the camera is located at (0, 0, 1) and looks along the negative z-axis; the Position and Orientation fields can be used to change these values. The Height Angle field defines the total vertical angle of the viewing volume; this and the Aspect Ratio field determine the horizontal angle. +The viewing volume for a perspective camera is a truncated pyramid. 
By default, the camera is located at *(0, 0, 1)* and looks along the negative z-axis; the Position and Orientation fields can be used to change these values. The Height Angle field defines the total vertical angle of the viewing volume; this and the Aspect Ratio field determine the horizontal angle. A `SoOrthographicCamera` camera defines a parallel projection from a viewpoint. -This camera does not diminish objects with distance as an SoPerspectiveCamera does. The viewing volume for an orthographic camera is a cuboid (a box). +This camera does not diminish objects with distance as a `SoPerspectiveCamera` does. The viewing volume for an orthographic camera is a cuboid (a box). By default, the camera is located at *(0, 0, 1)* and looks along the negative z-axis; the Position and Orientation fields can be used to change these values. The Height field defines the total height of the viewing volume; this and the Aspect Ratio field determine its width. Add a `SoCameraWidget` and connect it to your `SoGroup`. -![SoCameraWidget](images/tutorials/openinventor/Camera_3.png "SoCameraWidget") +![SoCameraWidget](images/tutorials/openinventor/Camera_4.png "SoCameraWidget") -This module shows a simple widget on an Inventor viewer that can be used to rotate, pan, or zoom the scene. You can configure the *Main Interaction* in the panel of the `SoCameraWidget`. +This module shows a simple widget on an Open Inventor viewer that can be used to rotate, pan, or zoom the scene. You can configure the *Main Interaction* in the panel of the `SoCameraWidget`. You can also add more than one widget to show multiple widgets in the same scene, see example network of the `SoCameraWidget` module. ## The `SoExaminerViewer` Module The `SoExaminerViewer` makes some things much easier, because a camera and a light are already integrated. -Add a `SoExaminerViewer` to your workspace and connect it to the `SoBackground`, the `SoMaterial` and the `SoOrientationModel` modules. 
+Add a `SoExaminerViewer` to your workspace and connect it to the `SoBackground`, the `SoMaterial`, and the `SoOrientationModel` modules. -![SoExaminerViewer](images/tutorials/openinventor/Camera_4.png "SoExaminerViewer") +![SoExaminerViewer](images/tutorials/openinventor/Camera_5.png "SoExaminerViewer") The difference to the `SoRenderArea` can be seen immediately. You can interact with your scene and a light is available initially. diff --git a/mevislab.github.io/content/tutorials/openinventor/mouseinteractions.md b/mevislab.github.io/content/tutorials/openinventor/mouseinteractions.md index 3a967f112..d8a8fd17c 100644 --- a/mevislab.github.io/content/tutorials/openinventor/mouseinteractions.md +++ b/mevislab.github.io/content/tutorials/openinventor/mouseinteractions.md @@ -18,7 +18,7 @@ menu: {{< youtube "Ye5lOHDWcRo" >}} ## Introduction -In this example, we implement some image or object interactions. We will create a 3D scene, in which we display a cube and change its size using the mouse. We also get to know another viewer, the module `SoExaminerViewer`. This viewer is important. It enables the rendering of Open Inventor scenes and allows interactions with the Open Inventor scenes. +In this example, we implement some image or object interactions. We will create a 3D scene in which we display a cube and change its size using the mouse. We also get to know another viewer, the module `SoExaminerViewer`. This viewer is important: It enables the rendering of Open Inventor scenes and allows interactions with the Open Inventor scenes. 
## Steps to Do @@ -31,7 +31,7 @@ Additional information about the `SoMouseGrabber` can be found here: {{< docuLin [//]: <> (MVL-653) -![SoMouseGrabber](images/tutorials/openinventor/V5_01.png "SoMouseGrabber") +![Network with a SoMouseGrabber](images/tutorials/openinventor/V5_01.png "Network with a SoMouseGrabber") ### Configure Mouse Interactions Now, open the panels of the module `SoMouseGrabber` and the module `SoExaminerViewer`, which displays a cube. In the viewer, press the right button of your mouse {{< mousebutton "right" >}} and move the mouse around. This action can be seen in the panel of the module SoMouseGrabber. @@ -39,23 +39,23 @@ Now, open the panels of the module `SoMouseGrabber` and the module `SoExaminerVi Make sure to configure `SoMouseGrabber` fields as seen below. {{}} -![SoMouseGrabber](images/tutorials/openinventor/V5_02.png "SoMouseGrabber") +![Network with a SoMouseGrabber and panels](images/tutorials/openinventor/V5_02.png "Network with a SoMouseGrabber and panels") **You can see:** 1. Button 3, the right mouse button {{< mousebutton "right" >}}, is tagged as being pressed 2. Changes of the mouse coordinates are displayed in the box *Output* -![Mouse Interactions](images/tutorials/openinventor/V5_03.png "Mouse Interactions") +![Mouse interactions](images/tutorials/openinventor/V5_03.png "Mouse interactions") ### Resize Cube via Mouse Interactions -We like to use the detected mouse movements to change the size of our cube. In order to that, open the panel of `SoCube`. Build parameter connections from the mouse coordinates to the width and depth of the cube. +We like to use the detected mouse movements to change the size of our cube. In order to do that, open the panel of `SoCube`. Establish parameter connections from the mouse coordinates to the width and depth of the cube.
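The parameter connection itself is just a value mapping: the mouse coordinates reported by `SoMouseGrabber` become the cube's width and depth. A minimal sketch of such a mapping (the normalized [0, 1] coordinate range and the size limits are assumptions for illustration):

```python
def mouse_to_cube_size(norm_x, norm_y, min_size=0.5, max_size=5.0):
    """Map normalized mouse coordinates in [0, 1] linearly to cube dimensions."""
    width = min_size + norm_x * (max_size - min_size)
    depth = min_size + norm_y * (max_size - min_size)
    return width, depth
```

In the network, no such code is needed: connecting the grabber's output fields directly to the `SoCube` fields applies the raw values on every mouse move.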
-![Change Cube Size With Mouse Events](images/tutorials/openinventor/V5_04.png "Change Cube Size With Mouse Events") +![Change cube size with mouse events](images/tutorials/openinventor/V5_04.png "Change cube size with mouse events") If you now press the right mouse button {{< mousebutton "right" >}} in the viewer and move the mouse around, the size of the cube changes. ## Exercises -1. Change location of the cube via Mouse Interactions by using the Module `SoTransform`. +1. Change the location of the cube via mouse interactions by using the module `SoTransform`. 1. Add more objects to the scene and interact with them. ## Summary diff --git a/mevislab.github.io/content/tutorials/openinventor/openinventorobjects.md b/mevislab.github.io/content/tutorials/openinventor/openinventorobjects.md index 7ae777ea6..bca0d52dd 100644 --- a/mevislab.github.io/content/tutorials/openinventor/openinventorobjects.md +++ b/mevislab.github.io/content/tutorials/openinventor/openinventorobjects.md @@ -29,13 +29,13 @@ First, add the modules `SoExaminerViewer` and `SoCone` to the workspace and conn We like to change the color of the cone. In order to do so, add the module `SoMaterial` to the workspace and connect the module as shown below. When creating an Open Inventor scene (by creating networks of Open Inventor modules), the sequence of module connections, in this case the sequence of the inputs to the module `SoExaminerViewer`, determines the functionality of the network. -Open Inventor modules are executed like scene graphs. This means modules are executed from top to bottom and from left to right. Here, it is important to connect the module `SoMaterial` to an input on the left side of the connection between `SoCone` and `SoExaminerViewer`. With this, we first select features like a color and these features are then assigned to all objects, which were executed afterward. Now, open the panel of the module `SoMaterial` and select any *Diffuse Color* you like. Here, we choose green.
+Open Inventor modules are executed like scene graphs. This means modules are traversed from top to bottom and from left to right. Here, it is important to connect the module `SoMaterial` to an input on the left side of the connection between `SoCone` and `SoExaminerViewer`. With this, we first select features like a color and these features are then assigned to all objects that are traversed afterward. Now, open the panel of the module `SoMaterial` and select any *Diffuse Color* you like. Here, we choose green. -![Colors and Material in Open Inventor](images/tutorials/openinventor/OI1_02.png "Colors and Material in Open Inventor") +![Color and material in Open Inventor](images/tutorials/openinventor/OI1_02.png "Color and material in Open Inventor") We like to add a second object to the scene. -In order to do that, add the module `SoSphere` to the workspace. Connect this module to `SoExaminerViewer`. When connecting `SoSphere` to an input on the right side of the connection between the viewer and the module `SoMaterial`, the sphere is also colored in green. One problem now is, that currently both objects are displayed at the same position. +In order to do that, add the module `SoSphere` to the workspace. Connect this module to `SoExaminerViewer`. When connecting `SoSphere` to an input on the right side of the connection between the viewer and the module `SoMaterial`, the sphere is also colored in green. One problem now is that currently both objects are displayed at the same position. ![Adding a SoSphere](images/tutorials/openinventor/OI1_03.png "Adding a SoSphere") @@ -44,7 +44,7 @@ They display both objects at different positions, add the modules `SoSeparator` and 1. The sphere loses its green color 2.
The cone is shifted to the side

-![Transformation](images/tutorials/openinventor/OI1_05.png "Transformation")
+![Transformation with SoTransform](images/tutorials/openinventor/OI1_05.png "Transformation with SoTransform")

The module `SoTransform` is responsible for shifting objects, in this case the cone, to the side. The module `SoSeparator` ensures that only the cone is shifted and also only the cone is colored in green. It separates these features from the rest of the scene.

@@ -54,23 +54,23 @@ We like to add a third object, a cube, and shift it to the other side of the sphere.

Again, we use the module `SoMaterial` to select a color for the cone and the sphere.

-![Multiple Materials](images/tutorials/openinventor/OI1_08.png "Multiple Materials")
+![Multiple materials](images/tutorials/openinventor/OI1_08.png "Multiple materials")

For easier handling, we group an object together with its features by using the module `SoGroup`. This does not separate features, which is why the cube is also colorized. All modules that are derived from `SoGroup` offer a basically infinite number of input connectors (a new connector is added for every new connection).

-![SoGroup](images/tutorials/openinventor/OI1_09.png "SoGroup")
+![Grouping modules with SoGroup](images/tutorials/openinventor/OI1_09.png "Grouping modules with SoGroup")

If we do not want to colorize the cube, we have to exchange the module `SoGroup` for another `SoSeparator` module.

-![SoSeparator](images/tutorials/openinventor/OI1_10.png "SoSeparator")
+![Grouping modules with SoSeparator](images/tutorials/openinventor/OI1_10.png "Grouping modules with SoSeparator")

The implementation of all objects can be grouped together. 
-![Grouping](images/tutorials/openinventor/OI1_11.png "Grouping") +![Grouping modules in the network with a network group](images/tutorials/openinventor/OI1_11.png "Grouping modules in the network with a network group") In addition to the objects, a background can be added to the scene using the module `SoBackground`. -![SoBackground](images/tutorials/openinventor/OI1_12.png "SoBackground") +![Using a SoBackground](images/tutorials/openinventor/OI1_12.png "Using a SoBackground") ## Summary * Scene objects are represented by nodes. diff --git a/mevislab.github.io/content/tutorials/openinventor/posteffectsinopeninventor.md b/mevislab.github.io/content/tutorials/openinventor/posteffectsinopeninventor.md index c296b94a2..d328dce50 100644 --- a/mevislab.github.io/content/tutorials/openinventor/posteffectsinopeninventor.md +++ b/mevislab.github.io/content/tutorials/openinventor/posteffectsinopeninventor.md @@ -23,14 +23,13 @@ In this tutorial, we will go over the steps to add shadows to our 3D objects, ma ## Steps to Follow ### From DICOM to Scene Object - -To incorporate DICOMs into your Open Inventor Scene, they have to be rendered as Open Inventor objects, which can be done by converting them into [WEMs](glossary/#winged-edge-meshes) first. Begin by adding the modules `LocalImage`, `WEMIsoSurface`, and `SoWEMRenderer` to your workspace. Open the panel of the `LocalImage` module, browse your files, and choose a DICOM with multiple frames as input data. Connect the `LocalImage` module's output connector to `WEMIsoSurface` module's input connector to create a WEM of the study's surface. Then, connect the `WEMIsoSurface` module's output connector to the `SoWEMRenderer` module's input connector to render a scene object that can be displayed by adding a `SoExaminerViewer` module to the workspace and connecting the `SoWEMRenderer` module's output connector to its input connector. 
+To incorporate DICOMs into your Open Inventor scene, they have to be rendered as Open Inventor objects, which can be done by converting them into [WEMs](glossary/#winged-edge-meshes) first. Begin by adding the modules `LocalImage`, `WEMIsoSurface`, and `SoWEMRenderer` to your workspace. Open the panel of the `LocalImage` module, browse your files, and choose a DICOM with multiple frames as input data. Connect the `LocalImage` module's output connector to the `WEMIsoSurface` module's input connector to create a WEM of the study object's surface. Then, connect the `WEMIsoSurface` module's output connector to the `SoWEMRenderer` module's input connector to render a scene object that can be displayed by adding a `SoExaminerViewer` module to the workspace and connecting the `SoWEMRenderer` module's output connector to its input connector.

{{}}
We don't recommend using single-frame DICOMs for this example as a certain depth is required to interact with the scene objects as intended. Also make sure that the pixel data of the DICOM file you choose contains all slices of the study, as it might be difficult to arrange scene objects of individual slices to resemble the originally captured study.
{{}}

-![From DICOM to SO](images/tutorials/openinventor/multiframetoso.PNG "How to create a scene object out of a multi-frame DICOM")
+![How to create a scene object out of a multi-frame DICOM](images/tutorials/openinventor/multiframetoso.PNG "How to create a scene object out of a multi-frame DICOM")

{{}}
Consider adding a `View2D` and an `Info` module to your `LocalImage` module's output connector to be able to compare the rendered object with the original image and adapt the isovalues to minimize noise.

@@ -51,18 +50,18 @@ Structuring the workspace by grouping modules based on their functionality helps

Use a `SoPostEffectMainGeometry` module to connect both of the groups you just created to the `SoExaminerViewer` module. 
Lastly, add a `SoPostEffectRenderer` module to your workspace and connect its output connector to the `SoExaminerViewer` module's input connector. -![Grouped](images/tutorials/openinventor/GroupedModules.PNG "Grouped modules") +![Grouped modules](images/tutorials/openinventor/GroupedModules.PNG "Grouped modules") You can now change your Open Inventor scene's background color. ### PostEffectEdges Add the module `SoPostEffectEdges` to your workspace and connect its output connector with the `SoExaminerViewer` module's input connector. -Then, open its panel and choose a color. You can try different modes, sampling distances and thresholds: +Then, open its panel and choose a color. You can try different modes, sampling distances, and thresholds: -![Colored Edges](images/tutorials/openinventor/Edges1.PNG "Colored edges") -![Colored Edges 2](images/tutorials/openinventor/Edges2.PNG "Varying settings of colored edges") -![Colored Edges 3](images/tutorials/openinventor/Edges3.PNG "Varying settings of colored edges") +![Colored edges](images/tutorials/openinventor/Edges1.PNG "Colored edges") +![Varying settings of colored edges, mode: Roberts](images/tutorials/openinventor/Edges2.PNG "Varying settings of colored edges, mode: Roberts") +![Varying settings of colored edges, mode: Sobel](images/tutorials/openinventor/Edges3.PNG "Varying settings of colored edges, mode: Sobel") ### PostEffectGeometry To include geometrical objects in your Open Inventor scene, add two `SoSeparator` modules to the workspace and connect them to the input connector of `SoPostEffectMainGeometry`. Then, add a `SoMaterial`, `SoTransform`, and `SoSphere` or `SoCube` module to each `SoSeparator` and adjust their size (using the panel of the `SoSphere` or `SoCube` module) and placement within the scene (using the panel of the `SoTransform` module) as you like. 
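Placing each object with its own `SoTransform` boils down to applying a translation (and optionally a scale) to every point of the geometry below it in the scene graph. A plain-Python sketch of that idea (illustrative only, not the Open Inventor API):

```python
# Sketch of what a transform node conceptually does to the geometry
# below it in the scene graph: every point is scaled, then translated.
# This is plain Python for illustration, not Open Inventor code.

def apply_transform(points, translation=(0.0, 0.0, 0.0), scale=(1.0, 1.0, 1.0)):
    """Scale and translate a list of (x, y, z) points."""
    tx, ty, tz = translation
    sx, sy, sz = scale
    return [(x * sx + tx, y * sy + ty, z * sz + tz) for x, y, z in points]

# Shift a unit cube's corners two units along x, like placing the
# second geometrical object next to the first one in the scene.
corners = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
print(apply_transform(corners, translation=(2.0, 0.0, 0.0)))
```

In the network, the same effect is achieved purely by adjusting the `SoTransform` panel; no scripting is required.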
@@ -71,15 +70,15 @@ To include geometrical objects in your Open Inventor scene, add two `SoSeparator You'll observe that the transparency setting in the `SoMaterial` module does not apply to the geometrical objects. Add a `SoPostEffectTransparentGeometry` module to your workspace, connect its output connector to the `SoExaminerViewer` module's input connector and its input connectors to the `SoSeparator` module's output connector to create transparent geometrical objects in your scene. {{}} - ![Workspace](images/tutorials/openinventor/WorkspaceAndNetwork.PNG "Workspace") + ![Network with additional opaque and transparent geometry](images/tutorials/openinventor/WorkspaceAndNetwork.PNG "Network with additional opaque and transparent geometry") ### PostEffectGlow To put a soft glow on the geometrical scene objects, the module `SoPostEffectGlow` can be added to the workspace. -![Glow](images/tutorials/openinventor/WorkspaceWithGlow.PNG "Applied SoPostEffectGlow") +![Applied SoPostEffectGlow](images/tutorials/openinventor/WorkspaceWithGlow.PNG "Applied SoPostEffectGlow") ## Summary * Multi-frame DICOM images can be rendered to be scene objects by converting them into WEMs first. -* Open Inventor scenes can be augmented by adding PostEffects to scene objects. +* Open Inventor scenes can be augmented by adding *PostEffects* to scene objects. 
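As a side note on the WEM step used above: `WEMIsoSurface` extracts the surface where the image intensity equals the chosen isovalue. Reduced to one dimension, the principle is to find the interpolated positions where an intensity profile crosses that value. A hedged sketch of this principle, not the module's actual algorithm:

```python
def iso_crossings(profile, isovalue):
    """Return interpolated positions where a 1D intensity profile
    crosses the isovalue (the 1D analogue of iso-surface extraction).
    Exact hits on a sample value are ignored in this simplified sketch."""
    crossings = []
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        # A crossing lies between samples i and i+1 if the isovalue
        # separates the two intensity values.
        if (a - isovalue) * (b - isovalue) < 0:
            t = (isovalue - a) / (b - a)  # linear interpolation
            crossings.append(i + t)
    return crossings

# Intensities along a ray through an object: background, object, background.
print(iso_crossings([0, 50, 300, 280, 40, 0], 200))
```

This is also why adapting the isovalue reduces noise: samples that only barely cross the threshold stop producing surface pieces.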
{{< networkfile "examples/open_inventor/PostEffectTutorial.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/shorts.md b/mevislab.github.io/content/tutorials/shorts.md index f70a71274..68691eaf0 100644 --- a/mevislab.github.io/content/tutorials/shorts.md +++ b/mevislab.github.io/content/tutorials/shorts.md @@ -20,8 +20,8 @@ This chapter shows some features and functionalities that are helpful but do not * [Using Snippets](tutorials/shorts#snippets) * [Scripting Assistant](tutorials/shorts#scriptingassistant) * [User Scripts](tutorials/shorts#user_scripts) -* [Show status of module in- and output](tutorials/shorts#mlimagestate) -* [Module suggestion of module in- and output](tutorials/shorts#modulesuggest) +* [Show Status of Module Inputs and Outputs](tutorials/shorts#mlimagestate) +* [Module Suggestion of Module Inputs and Outputs](tutorials/shorts#modulesuggest) ## Keyboard Shortcuts {#shortcuts} This is a collection of useful keyboard shortcuts in MeVisLab. @@ -34,31 +34,31 @@ This is a collection of useful keyboard shortcuts in MeVisLab. 
- {{< keyboard "CTRL" "1" >}}
+ {{< keyboard "Ctrl" "1" >}}
 Automatically arrange selection of modules in the current network

- {{< keyboard "CTRL" "2" >}}
+ {{< keyboard "Ctrl" "2" >}}
 Open most recent network file

- {{< keyboard "CTRL" "3" >}}
+ {{< keyboard "Ctrl" "3" >}}
 Run most recent test case (extremely useful for developers)

- {{< keyboard "CTRL" "A" >}} then {{< keyboard "CTRL" "1" >}}
+ {{< keyboard "Ctrl" "A" >}} then {{< keyboard "Ctrl" "1" >}}
 Layout network

- {{< keyboard "CTRL" "A" >}} then {{< keyboard "TAB" >}}
- Layout *.script* file (in MATE)
+ {{< keyboard "Ctrl" "A" >}} then {{< keyboard "TAB" >}}
+ Layout .script file (in MATE)

- {{< keyboard "CTRL" "D" >}}
+ {{< keyboard "Ctrl" "D" >}}
 Duplicate currently selected module (including all field values)

- {{< keyboard "CTRL" >}} and Left Mouse Button {{< mousebutton "left" >}} or Middle Mouse Button {{< mousebutton "middle" >}}
+ {{< keyboard "Ctrl" >}} and Left Mouse Button {{< mousebutton "left" >}} or Middle Mouse Button {{< mousebutton "middle" >}}
 Show internal network

@@ -66,19 +66,19 @@ This is a collection of useful keyboard shortcuts in MeVisLab.

 Show hidden outputs of the currently selected module

- {{< keyboard "CTRL" "ALT" "T" >}}
- Start test center
+ {{< keyboard "Ctrl" "Alt" "T" >}}
+ Start TestCaseManager

- {{< keyboard "CTRL" "K" >}}
+ {{< keyboard "Ctrl" "K" >}}
 Restart MeVisLab with current network(s)

- {{< keyboard "CTRL" "R" >}}
- Run script file with the same name of your network file if available in the same directory.
+ {{< keyboard "Ctrl" "R" >}}
+ Run the script file with the same name as your network file, if available in the same directory

- {{< keyboard "ALT" >}} Double-click {{< mousebutton "left" >}} on a module
+ {{< keyboard "Alt" >}} Double-click {{< mousebutton "left" >}} on a module
 Open automatic panel of the module.
## Using Snippets {#snippets} {{< youtube "xX7wJiyfxhA" >}} -Sometimes you have to create the same network over and over again -- for example, to quickly preview DICOM files. Generally, you will at least add one module to load and another module to display your images. Sometimes you may also want to view the DICOM header data. A network you possibly generate whenever opening DICOM files will be the following: +Sometimes you have to create the same network over and over again — for example, to quickly preview DICOM files. Generally, you will at least add one module to load and another module to display your images. Sometimes you may also want to view the DICOM header data. A network you possibly generate whenever opening DICOM files will be the following: -![Open DICOM files](images/tutorials/Snippets_Network.png "Open DICOM files") +![Open and view DICOM files](images/tutorials/Snippets_Network.png "Open and view DICOM files") -Create a snippet of your commonly used networks by adding the snippets list from the main menu. Open {{< menuitem "View" "Views" "Snippets List">}}. A new panel is shown. Select all modules of your network and double-click *New...* in your *Snippets List*. +Create a snippet of your commonly used networks by adding the snippets list from the main menu. Open {{< menuitem "View" "Views" "Snippets List">}}. A new panel is shown. Select all modules of your network and double-click *New...* in your Snippets List. -Enter a name for your snippet like *DICOM Viewer* and click *Add*. +Enter a name for your snippet like *DICOM Viewer* and click {{< mousebutton "left" >}} Add. A new snippet will be shown in your Snippets List. You can drag and drop the snippet to your workspace and the modules are reused, including all defined field values. @@ -139,7 +139,7 @@ UserIDEMenus { ``` {{}} -We define an action *Set Dark Theme*, which is added to the submenu *Theme* in the MeVisLab IDE menu item {{< menuitem "Scripting">}}. 
The action is named *changeTheme* and a reference to a Python script is added as *$(LOCAL)/changeTheme.py*. We also defined a keyboard shortcut {{< keyboard "ctrl+F9" >}}. +We define an action *Set Dark Theme*, which is added to the submenu *Theme* in the MeVisLab IDE menu item {{< menuitem "Scripting">}}. The action is named changeTheme and a reference to a Python script is added as *$(LOCAL)/changeTheme.py*. We also defined a keyboard shortcut {{< keyboard "Ctrl" "F9" >}}. Change to MeVisLab IDE and select menu item {{< menuitem "Extras" "Reload Module Database (Clear Cache)">}}. Open the menu item {{< menuitem "Scripting">}}. You can see the new submenu {{< menuitem "Theme" "Set Dark Theme">}}. If you select this entry, you get an error in MeVisLab console: *Could not locate user script: .../changeTheme.py* @@ -166,18 +166,18 @@ QApplication.setPalette(palette) This script defines the color of the MeVisLab user interface elements. You can define other colors and more items; this is just an example of what you can do with user scripts. -Switch back to the MeVisLab IDE and select the menu item {{< menuitem "Extras" "Reload Module Database (Clear Cache)">}} again. The colors of the MeVisLab IDE change as defined in our Python script. This change persists until you restart MeVisLab and can always be repeated by selecting the menu entry or pressing the keyboard shortcut {{< keyboard "ctrl+F9" >}}. +Switch back to the MeVisLab IDE and select the menu item {{< menuitem "Extras" "Reload Module Database (Clear Cache)">}} again. The colors of the MeVisLab IDE change as defined in our Python script. This change persists until you restart MeVisLab and can always be repeated by selecting the menu entry or pressing the keyboard shortcut {{< keyboard "Ctrl" "F9" >}}. ## Show Status of Module Input and Output {#mlimagestate} Especially in large networks it is useful to see the state of the input and output connectors of a module. 
By default, the module connectors do not show if data is available. The image below shows a `DicomImport` module and a `View2D` module where no data is loaded.

![No status on connector](images/tutorials/LMIMageState_Off.png "No status on connector")

-In the MeVisLab preferences dialog, you can see a checkbox *Show ML image state*. By default, the setting is *Off*.
+In the MeVisLab preferences dialog, you can see a checkbox Show ML image state. By default, the setting is *Off*.

![Show ML image state](images/tutorials/LMIMageState.png "Show ML image state")

-After enabling *Show ML image state*, your network changes and the input and output connectors appear red in the case no data is available at the output.
+After enabling Show ML image state, your network changes and the input and output connectors appear red if no data is available at the output.

![No data on connector](images/tutorials/LMIMageState_On_1.png "No data on connector")

diff --git a/mevislab.github.io/content/tutorials/summary.md b/mevislab.github.io/content/tutorials/summary.md
index 16c13647c..7c7bdd917 100644
--- a/mevislab.github.io/content/tutorials/summary.md
+++ b/mevislab.github.io/content/tutorials/summary.md
@@ -18,7 +18,9 @@ menu:

## Summary
This chapter will summarize all previous chapters and you will develop an entire application in MeVisLab. The complete workflow from developing a prototype to delivering your final application to your customer is explained step-by-step.

-![Prototype to Product](images/tutorials/summary/Prototyping.png "Prototype to Product")
+![Prototype to product](images/tutorials/summary/Prototyping.png "Prototype to product")
+
+
{{}}
Some of the features described here will require a separate license. Building an installable executable requires the **MeVisLab ApplicationBuilder** license. It extends the **MeVisLab SDK**, so that you can generate an installer of your developed macro module. 
@@ -46,12 +48,12 @@ In the first step, you are developing an application based on the following requ

* **Requirement 9.3**: All

### Step 2: Create Your Macro Module
-Your network will be encapsulated in a macro module for later application development. For details about macro modules, see [Example 2.2: Global macro modules](tutorials/basicmechanisms/macromodules/globalmacromodules/).
+Your network will be encapsulated in a macro module for later application development. For details about macro modules, see [Example 2.2: Global Macro Modules](tutorials/basicmechanisms/macromodules/globalmacromodules/).

### Step 3: Develop a User Interface and Add Python Scripting {#UIDesign}
-Develop the UI and Python Scripts based on your requirements from Step 1. The resulting UI will look like the below mockup:
+Develop the UI and Python scripts based on your requirements from Step 1. The resulting UI will look like the mockup below:

-![User Interface Design](images/tutorials/summary/UIMockUp.png "User Interface Design")
+![User interface design](images/tutorials/summary/UIMockUp.png "User interface design")

## Review
### Step 4: Write Automated Tests for Your Macro Module
diff --git a/mevislab.github.io/content/tutorials/summary/summary1.md b/mevislab.github.io/content/tutorials/summary/summary1.md
index 0356adff2..406420a2f 100644
--- a/mevislab.github.io/content/tutorials/summary/summary1.md
+++ b/mevislab.github.io/content/tutorials/summary/summary1.md
@@ -23,7 +23,7 @@ In this example, we will develop a network that fulfills the requirements mentio

## Steps to Do
### 2D Viewer
-The 2D viewer shall visualize the loaded images. In addition to that, it shall be possible to click into the image to trigger a region growing algorithm to segment parts of the loaded image based on a position and a threshold.
In addition to that, it shall be possible to click {{< mousebutton "left" >}} into the image to trigger a region growing algorithm to segment parts of the loaded image based on a position and a threshold. The following requirements from the [overview](tutorials/summary#DevelopNetwork) will be implemented: * **Requirement 1**: The application shall be able to load DICOM data @@ -40,7 +40,7 @@ Add a `LocalImage` and a `View2D` module to your workspace. You are now able to Region growing requires a `SoView2DMarkerEditor`, a `SoView2DOverlay`, and a `RegionGrowing` module. Add them to your network and connect them as seen below. Configure the `RegionGrowing` module to use a *3D-6-Neighborhood (x,y,z)* relation and an automatic threshold value of *1.500*. Also select *Auto-Update*. -Set `SoView2DMarkerEditor` to allow only one marker by defining *Max Size = 1* and *Overflow Mode = Remove All*. For our application we only want one marker to be set for defining the `RegionGrowing`. +Set `SoView2DMarkerEditor` to allow only one marker by defining Max Size of *1* and Overflow Mode *Remove All*. For our application, we only want one marker to be set for defining the `RegionGrowing`. If you now click into your loaded image via left mouse button {{< mousebutton "left" >}}, the `RegionGrowing` module segments all neighborhood voxels with a mean intensity value plus/minus the defined percentage value from your click position. @@ -48,16 +48,16 @@ The overlay is shown in white. ![RegionGrowing via marker editor](images/tutorials/summary/Example1_2.png "RegionGrowing via marker editor") -Open the `SoView2DOverlay` module, change *Blend Mode* to *Blend*, and select any color and *Alpha Factor* for your overlay. The applied changes are immediately visible. +Open the `SoView2DOverlay` module, change Blend Mode to *Blend*, and select any color and Alpha Factor for your overlay. The applied changes are immediately visible. 
![Overlay color and transparency](images/tutorials/summary/Example1_3.png "Overlay color and transparency")

-The segmented results from the `RegionGrowing` module might contain gaps because of differences in the intensity value of neighboring voxels. You can close these gaps by adding a `CloseGap` module. Connect it to the `RegionGrowing` and the `SoView2DOverlay` module and configure *Filter Mode* as *Binary Dilatation*, *Border Handling* as *Pad Dst Fill*, and set *KernelZ* to *3*.
+The segmented results from the `RegionGrowing` module might contain gaps because of differences in the intensity value of neighboring voxels. You can close these gaps by adding a `CloseGap` module. Connect it to the `RegionGrowing` and the `SoView2DOverlay` module and configure Filter Mode as *Binary Dilatation*, Border Handling as *Pad Dst Fill*, and set KernelZ to *3*.

Lastly, we want to calculate the volume of the segmented parts. Add a `CalculateVolume` module and connect it to the `CloseGap` module.

The 2D viewer now provides the basic functionalities. You can group the modules in your network for an improved overview by selecting {{}}. Leave `LocalImage` out of the group and name it *2D Viewer*. Your network should now look like this:

-![Group 2D Viewer](images/tutorials/summary/Example1_4.png "Group 2D Viewer")
+![Group 2D viewer](images/tutorials/summary/Example1_4.png "Group 2D viewer")

### 3D Viewer
The 3D viewer shall visualize your loaded image in 3D and additionally provide the possibility to render your segmentation results. You will be able to decide for different views, displaying the image and the segmentation, only the image or only the segmentation. The volume (in ml) of your segmentation results shall be calculated. 
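The region growing used in the 2D viewer above is essentially a flood fill: starting from the clicked voxel, neighbors are accepted while their intensity stays within a band around the seed intensity. A simplified 2D, stdlib-only sketch of the idea (the real `RegionGrowing` module works on 3D images and offers more neighborhood and threshold options):

```python
from collections import deque

def region_grow(image, seed, interval_percent):
    """Flood fill from `seed` (row, col), accepting 4-neighbors whose
    intensity is within +/- interval_percent of the seed intensity.
    Simplified 2D sketch of region growing, not the MeVisLab module."""
    rows, cols = len(image), len(image[0])
    seed_value = image[seed[0]][seed[1]]
    tolerance = abs(seed_value) * interval_percent / 100.0
    segmented = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in segmented:
                if abs(image[nr][nc] - seed_value) <= tolerance:
                    segmented.add((nr, nc))
                    queue.append((nr, nc))
    return segmented

image = [
    [100, 102,  10,  10],
    [101, 103,  10, 100],
    [ 10,  10,  10,  10],
]
# Click on the bright region in the top-left corner; 5% tolerance.
print(sorted(region_grow(image, (0, 0), 5)))
```

Note how the bright voxel at the far right is not segmented even though its intensity matches: it is not connected to the seed, which is exactly why gaps in connectivity can occur and a `CloseGap` step is useful.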
@@ -72,23 +72,23 @@ The following requirements from [overview](tutorials/summary#DevelopNetwork) wil

* **Requirement 9.2**: Segmentation results
* **Requirement 9.3**: All

-Add a `SoExaminerViewer`, a `SoWEMRenderer`, and an `IsoSurface` module to your existing network and connect them to the `LocalImage` module. Configure the `IsoSurface` to use an *IsoValue* of *200*, a *Resolution* of *1* and check *Auto-Update* and *Auto-Apply*.
+Add a `SoExaminerViewer`, a `SoWEMRenderer`, and an `IsoSurface` module to your existing network and connect them to the `LocalImage` module. Configure the `IsoSurface` to use an IsoValue of *200*, a Resolution of *1* and check Auto-Update and Auto-Apply.

-![3D Viewer](images/tutorials/summary/Example1_5.png "3D Viewer")
+![3D viewer](images/tutorials/summary/Example1_5.png "3D viewer")

The result should be a three-dimensional rendering of your image.

![SoExaminerViewer](images/tutorials/summary/Example1_6.png "SoExaminerViewer")

{{}}
-If the rendering is not immediately applied, click *Apply* in your `IsoSurface` module.
+If the rendering is not immediately applied, click Apply {{< mousebutton "left" >}} in your `IsoSurface` module.
{{}}

-Define the field instanceName of your `IsoSurface` module as IsoSurfaceImage and add another `IsoSurface` module to your network. Set the instanceName to *IsoSurfaceSegmentation* and connect the module to the output of the `CloseGap` module from the image segmentation. Set IsoValue to *420*, Resolution to *1*, and check Auto-Update and Auto-Apply.
+Define the field instanceName of your `IsoSurface` module as *IsoSurfaceImage* and add another `IsoSurface` module to your network. Set the instanceName to *IsoSurfaceSegmentation* and connect the module to the output of the `CloseGap` module from the image segmentation. Set IsoValue to *420*, Resolution to *1*, and check Auto-Update and Auto-Apply. 
Set instanceName of the `SoWEMRenderer` module to *SoWEMRendererImage* and add another `SoWEMRenderer` module. Set this instanceName to *SoWEMRendererSegmentation* and connect it to the `IsoSurfaceSegmentation` module. Selecting the output of the new `SoWEMRenderer` shows the segmented parts as a 3D object in the output inspector. -![Segmentation preview in output inspector](images/tutorials/summary/Example1_7.png "Segmentation preview in output inspector") +![Segmentation preview in the Output Inspector](images/tutorials/summary/Example1_7.png "Segmentation preview in the Output Inspector") Once again, we should group the modules used for 3D viewing and name the new group *3D Viewer*. @@ -109,16 +109,16 @@ The default input of the switch is *None*. Your 3D viewer remains black. Using t Add a `SoGroup` module and connect both `SoWEMRenderer` modules as input. The output needs to be connected to the right input of the `SoSwitch` module. -![SoGroup](images/tutorials/summary/Example1_10.png "SoGroup") +![Using a SoGroup to combine the 3D outputs](images/tutorials/summary/Example1_10.png "Using a SoGroup to combine the 3D outputs") You can now also toggle input *2* of the switch showing both 3D objects. The only problem is: You cannot see the brain because it is located inside the head. Open the `SoWEMRendererImage` module panel and set faceAlphaValue to *0.5*. The viewer now shows the head in a semitransparent manner, so that you can see the brain. Certain levels of opacity are difficult to render. Add a `SoDepthPeelRenderer` module and connect it to the semitransparent `SoWEMRendererImage` module. Set Layers of the renderer to *1*. -![SoDepthPeelRenderer](images/tutorials/summary/Example1_Both.png "SoDepthPeelRenderer") +![Using a SoDepthPeelRenderer to correct transparency artifacts](images/tutorials/summary/Example1_Both.png "Using a SoDepthPeelRenderer to correct transparency artifacts") You have a 2D and a 3D viewer now. 
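The `SoSwitch` behavior used here follows Open Inventor's *whichChild* convention: -1 (SO_SWITCH_NONE) renders nothing, -3 (SO_SWITCH_ALL) renders all children, and a non-negative index renders exactly one child. A minimal sketch of that selection logic in plain Python (not the Inventor API):

```python
# Open Inventor's SoSwitch selection convention, sketched in plain Python.
SO_SWITCH_NONE = -1   # render nothing (the viewer stays black)
SO_SWITCH_ALL = -3    # render every child (head and brain together)

def switch_children(children, which_child):
    """Return the children a switch-like node would traverse.
    Assumes which_child is SO_SWITCH_NONE, SO_SWITCH_ALL, or a
    valid non-negative child index."""
    if which_child == SO_SWITCH_NONE:
        return []
    if which_child == SO_SWITCH_ALL:
        return list(children)
    return [children[which_child]]

renderers = ["SoWEMRendererImage", "SoWEMRendererSegmentation"]
print(switch_children(renderers, SO_SWITCH_ALL))
print(switch_children(renderers, 1))
```

This mirrors why the viewer stays black with the default input (*None*) and shows both objects when the last input is selected.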
Let's define the colors of the overlay to be reused for the 3D segmentation. ### Parameter Connections for Visualization -Open the panels of the `SoView2DOverlay` and the `SoWEMRendererSegmentation` module. Draw a parameter connection from SoView2DOverlay.baseColor to SoWEMRendererSegmentation.faceDiffuseColor. +Open the panels of the `SoView2DOverlay` and the `SoWEMRendererSegmentation` module. Establish a parameter connection from SoView2DOverlay.baseColor to SoWEMRendererSegmentation.faceDiffuseColor. ![Synchronized segmentation colors](images/tutorials/summary/Example1_11.png "Synchronized segmentation colors") @@ -126,6 +126,6 @@ Now, the 3D visualization uses the same color as the 2D overlay. ## Summary * You built a network providing the basic functionalities of your application. -* Actions inside your application need to be executed by changing fields in your network or by manually touching a trigger. +* Actions inside your application are executed by changing fields in your network or by manually touching a trigger. {{< networkfile "examples/summary/TutorialSummary.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/summary/summary2.md b/mevislab.github.io/content/tutorials/summary/summary2.md index a81c215a8..e70ee7f2c 100644 --- a/mevislab.github.io/content/tutorials/summary/summary2.md +++ b/mevislab.github.io/content/tutorials/summary/summary2.md @@ -8,7 +8,7 @@ tags: ["Advanced", "Tutorial", "Prototyping", "Macro modules"] menu: main: identifier: "summaryexample2" - title: "Create a Macro Module From Your Network" + title: "Create a Macro Module from Your Network" weight: 810 parent: "summary" --- @@ -26,26 +26,26 @@ Make sure to have your *.mlab* file from the previous [tutorial](tutorials/summa ### Package Creation Packages are described in detail in [Example 2.1: Package creation](tutorials/basicmechanisms/macromodules/package/). If you already have your own package, you can skip this part and continue creating a macro module. 
-Open the Project Wizard via {{< menuitem "File" "Run Project Wizard..." >}} and select *New Package*. Run the Wizard and enter details of your new package and click *Create*.
+Open the Project Wizard via {{< menuitem "File" "Run Project Wizard..." >}} and select *New Package*. Run the Wizard, enter the details of your new package, and click Create.

![Package wizard](images/tutorials/summary/Example2_1.png "Package wizard")

MeVisLab reloads and you can start creating your macro module.

### Create a Macro Module
-Open the Project Wizard via {{< menuitem "File" "Run Project Wizard..." >}} and select *macro module*. Run the Wizard and enter details of your new macro module.
+Open the Project Wizard via {{< menuitem "File" "Run Project Wizard..." >}} and select *Macro Module*. Run the Wizard and enter the details of your new macro module.

-![Macro module wizard](images/tutorials/summary/Example2_2.png "Macro module wizard")
+![Macro module wizard: module properties](images/tutorials/summary/Example2_2.png "Macro module wizard: module properties")

-Select the created package and click *Next*.
+Select the created package and click Next >.

-![Macro module wizard](images/tutorials/summary/Example2_3.png "Macro module wizard")
+![Macro module wizard: macro module properties](images/tutorials/summary/Example2_3.png "Macro module wizard: macro module properties")

-Select your *.mlab* file from [Step 1](tutorials/summary/summary1/) and check *Add Python file*. Click *Next*.
+Select your *.mlab* file from [Step 1](tutorials/summary/summary1/) and check Add Python file. Click Next >.

-![Macro module wizard](images/tutorials/summary/Example2_4.png "Macro module wizard")
+![Macro module wizard: module field interface](images/tutorials/summary/Example2_4.png "Macro module wizard: module field interface")

-You do not have to define fields of your macro module now, we will do that later. Click *Create*. The file explorer opens showing the directory of your macro module. 
It should be the same directory you selected for your Package.
+You do not have to define fields of your macro module now; we will do that later. Click Create. The file explorer opens showing the directory of your macro module. It should be the same directory you selected for your Package.

### Directory Structure of a Macro Module
The directory structure for a macro module is as follows:

@@ -60,7 +60,7 @@ The directory structure for a macro module is as follows:

* .py
* .script

-![Directory Structure](images/tutorials/summary/Example2_6.png "Directory Structure")
+![Directory structure](images/tutorials/summary/Example2_6.png "Directory structure")

#### Definition (*.def*) File
The initial *.def* file contains information you entered into the Wizard for the macro module.

@@ -79,7 +79,7 @@ Macro module TutorialSummary {
```
{{}}

-An *externalDefinition* to a script file is also added (see below for the *.script* file).
+An externalDefinition to a *.script* file is also added (see below for the *.script* file).

#### MeVisLab Network (*.mlab*) File
The *.mlab* file is a copy of the *.mlab* file you developed in [Step 1](tutorials/summary/summary1/) and reused in the wizard. In the next chapters, this file will be used as *internal network*.

@@ -116,13 +116,13 @@ The source also defines your Python file to be used when calling functions and e

### Using Your Macro Module
As you created a global macro module, you can search for it in the MeVisLab *Module Search*.

-![Module Search](images/tutorials/summary/Example2_7.png "Module Search")
+![Module search](images/tutorials/summary/Example2_7.png "Module search")

-We did not define inputs or outputs. You cannot connect your module to others. In addition to that, we did not develop a user interface. Double-clicking your module {{< mousebutton "left" >}} only opens the automatic panel showing the *instanceName*.
+We did not define inputs or outputs. You cannot connect your module to others. 
In addition to that, we did not develop a user interface. Double-clicking your module {{< mousebutton "left" >}} only opens the automatic panel showing the instanceName. -![Automatic Panel](images/tutorials/summary/Example2_8.png "Automatic Panel") +![Automatic panel](images/tutorials/summary/Example2_8.png "Automatic panel") -Right-click on your module allows you to open the internal network as developed in [Step 1](tutorials/summary/summary1/). +Right-click {{< mousebutton "right" >}} on your module allows you to open the internal network as developed in [Step 1](tutorials/summary/summary1/). ## Summary * Macro modules encapsulate an entire MeVisLab network including all modules. diff --git a/mevislab.github.io/content/tutorials/summary/summary3.md b/mevislab.github.io/content/tutorials/summary/summary3.md index 249a94182..3c0b60c42 100644 --- a/mevislab.github.io/content/tutorials/summary/summary3.md +++ b/mevislab.github.io/content/tutorials/summary/summary3.md @@ -70,9 +70,12 @@ Window { ``` {{}} +{{}} +We use *Category* as the top-level layouter in the *Window* to give the inner content a small margin. Otherwise, the controls touch the border of the window and look unappealing.{{}} + You can preview your initial layout in MeVisLab by double-clicking your module {{< mousebutton "left" >}}. -![Initial Window Layout](images/tutorials/summary/Example3_1.png "Initial Window Layout") +![Initial window layout](images/tutorials/summary/Example3_1.png "Initial window layout") You can see the four vertical aligned parts as defined in the *.script* file. Now, we are going to add the content of the boxes. @@ -136,10 +139,10 @@ Window { Again, you can preview your user interface in MeVisLab directly. You can already select a file to open. The image is available at the output of the `LocalImage` module in your internal network but the viewers are missing in our interface. 
-![Source Box](images/tutorials/summary/Example3_2.png "Source Box") +![Source box](images/tutorials/summary/Example3_2.png "Source box") ##### Viewing -Add the two viewer modules to the *Viewing* section of your *.script* file and define their field as View2D.self and SoExaminerViewer.self. Set expandX = *Yes* and expandY = *Yes for both viewing modules. We want them to resize in the case the size of the Window changes. +Add the two viewer modules to the *Viewing* section of your *.script* file and define their field as View2D.self and SoExaminerViewer.self. Set expandX = *Yes* and expandY = *Yes* for both viewing modules. We want them to resize in case the size of the window changes. Set the 2D viewer's type to *SoRenderArea* and the 3D viewer's type to *SoExaminerViewer* and inspect your new user interface in MeVisLab. @@ -165,7 +168,7 @@ Set the 2D viewer's type to *SoRenderArea* and the 3D vie ``` {{}} -![2D and 3D Viewer](images/tutorials/summary/Example3_3.png "2D and 3D Viewer") +![2D and 3D viewer](images/tutorials/summary/Example3_3.png "2D and 3D viewer") The images selected in the *Source* section are shown in 2D and 3D. We simply reused the existing fields and viewers from your internal network and are already able to interact with the images. As the `View2D` of your internal network itself provides the possibility to accept markers and starts the `RegionGrowing`, this is also already possible and the segmentations are shown in 2D and 3D. @@ -224,7 +227,7 @@ Setting min and max is not necessa Add the field to the *Settings Box* and set step = *0.1* and slider = *Yes*. -For the `RegionGrowing` threshold, add the field thresholdInterval to *Parameters* section and set type = *Integer*, min = *1*, max = *100*, and internalName = RegionGrowing.autoThresholdIntervalSizeInPercent.
+For the `RegionGrowing` threshold, add the field thresholdInterval to the *Parameters* section and set type = *Integer*, min = *1*, max = *100*, and internalName = RegionGrowing.autoThresholdIntervalSizeInPercent. {{}} Setting min and max is not necessary, it is inherited already. @@ -232,7 +235,7 @@ Setting min and max is not necessa Add the field to the *Settings* UI, and define step = *0.1* and slider = *Yes*. -Define a field isoValueImage in the *Parameters* section and set internalName = IsoSurfaceImage.isoValue, type = *Integer*, min = *1*, and max = *1000*. +Define a field isoValueImage in the *Parameters* section and set internalName = IsoSurfaceImage.isoValue, type = *Integer*, min = *1*, and max = *1000*. In the *Settings* section of the UI, set step = *2* and slider = *Yes*. @@ -333,7 +336,7 @@ Window { Your user interface of the macro module should now look similar to this: -![User Interface without Python Scripting](images/tutorials/summary/Example3_4.png "User Interface without Python Scripting") +![User interface without Python scripting](images/tutorials/summary/Example3_4.png "User interface without Python scripting") For the next elements, we require Python scripting. Nevertheless, you are already able to use your application and perform the basic functionalities without writing any line of code. @@ -345,13 +348,13 @@ Events can be raised by the user (e.g., by clicking a button) or by the applicat #### 3D Visualization Selection You will now add a selection possibility for the 3D viewer. This allows you to define the visibility of the 3D objects File, Segmented, or Both. -Add another field to your *Parameters* section. Define the field as selected3DView and set type = *Enum* and values to *Segmented*, *File* and *Both*. +Add another field to your *Parameters* section. Define the field as selected3DView and set type = *Enum* and values to *Segmented*, *File*, and *Both*. Add a *ComboBox* to your *Settings* and use the field name defined above. 
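The effect of a threshold interval given in percent, as controlled by RegionGrowing.autoThresholdIntervalSizeInPercent, can be made concrete with a small sketch. This is a generic flood fill in plain Python, an illustration of the idea only and not MeVisLab's `RegionGrowing` implementation: a pixel joins the region if its intensity lies within the given percentage interval around the seed intensity.

```python
from collections import deque

def region_grow(image, seed, interval_percent):
    """Grow a region from `seed` on a 2D grid. A pixel is accepted if its
    intensity deviates from the seed intensity by at most interval_percent
    percent. Illustrative sketch only, not MeVisLab's RegionGrowing module."""
    rows, cols = len(image), len(image[0])
    seed_value = image[seed[0]][seed[1]]
    tolerance = abs(seed_value) * interval_percent / 100.0
    mask = [[False] * cols for _ in range(rows)]
    queue = deque([seed])
    mask[seed[0]][seed[1]] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not mask[nr][nc]:
                if abs(image[nr][nc] - seed_value) <= tolerance:
                    mask[nr][nc] = True
                    queue.append((nr, nc))
    return mask
```

On a toy 3x3 image with a 10 percent interval, only the high-intensity pixels connected to the seed end up in the mask.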
Set alignX = *Left* and editable = *No* and open the *Window* of the macro module in MeVisLab. -The values of the field can be selected, but nothing happens in our viewers. We need to implement a *FieldListener* in Python that reacts on any value changes of the field selected3DView. +The values of the field can be selected, but nothing happens in our viewers. We need to implement a *FieldListener* in the *.script* file that reacts to any value changes of the field selected3DView. -Open your script file and go to the *Commands* section. Add a *FieldListener* and reuse the name of our internal field selected3DView. Add a *Command* to the *FieldListener* calling a Python function *viewSelectionChanged*. +Open your script file and go to the *Commands* section. Add a *FieldListener* and reuse the name of our internal field selected3DView. Add a *Command* to the *FieldListener* calling a Python function viewSelectionChanged. {{< highlight filename=".script" >}} ```Stan @@ -365,7 +368,7 @@ Commands { ``` {{}} -Right-click {{< mousebutton "right" >}} the command select {{< menuitem "Create Python Function 'viewSelectionChanged'" >}}. MATE automatically opens the Python file of your macro module and creates a function *viewSelectionChanged*. +Right-click {{< mousebutton "right" >}} the command and select {{< menuitem "Create Python Function 'viewSelectionChanged'" >}}. MATE automatically opens the Python file of your macro module and creates a function viewSelectionChanged. {{< highlight filename=".py" >}} ```Python @@ -386,7 +389,7 @@ The function sets the `SoSwitch` to the child value depending on the selected fi #### Setting the Marker The marker for the `RegionGrowing` is defined by the clicked position as Vector3. Add another field markerPosition to the *Parameters* section and define type = *Vector3*. -Then, add a trigger field applyMarker to your *Parameters* section. Set type = *Trigger* and title = *Add*.
+Then, add a trigger field applyMarker to your *Parameters* section. Set type = *Trigger* and title = *Add*. {{< highlight filename=".script" >}} ```Stan @@ -443,13 +446,13 @@ def applyPosition(): Whenever the field markerPosition changes its value, the value is automatically applied to the SoView2DMarkerEditor.newPosXYZ. Clicking SoView2DMarkerEditor.add adds the new position to the `SoView2DMarkerEditor` and the region growing starts. {{}} -The *Field* SoView2DMarkerEditor.useInsertTemplate needs to be set to *True* in order to allow adding markers via Python. +The field SoView2DMarkerEditor.useInsertTemplate needs to be set to *True* in order to allow adding markers via Python. {{}} #### Reset Add a new field resetApplication to the *Parameters* section and set type = *Trigger* and title = *Reset*. -Add another *FieldListener* to your *Commands* and define command = *resetApplication*. +Add another *FieldListener* to your *Commands* section and define command = *resetApplication*. Add the field to your *Source* region. @@ -486,7 +489,7 @@ What shall happen when we reset the application? * The loaded image shall be unloaded, the viewer shall be empty * The marker shall be reset if available -Add the Python function *resetApplication* and implement the following: +Add the Python function resetApplication and implement the following: {{< highlight filename=".py" >}} ```Python from mevis import * @@ -498,7 +501,7 @@ def resetApplication(): ``` {{}} -You can also reset the application to initial state by adding a *initCommand* to your *Window*. Call the *resetApplication* function here, too, and whenever the window is opened, the application is reset to its initial state. +You can also reset the application to initial state by adding a *initCommand* to your *Window*. Call the resetApplication function here, too, and whenever the window is opened, the application is reset to its initial state. 
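The *FieldListener* mechanism used throughout this step (a field changes its value, a command fires) can be mimicked in plain Python to make the control flow concrete. This is a toy model for illustration, not MeVisLab's field API:

```python
class Field:
    """Toy model of an MDL field that notifies listeners on value change."""
    def __init__(self, value=None):
        self._value = value
        self._listeners = []

    def add_listener(self, callback):
        self._listeners.append(callback)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        # Notify only on an actual change, like a FieldListener.
        if new_value != self._value:
            self._value = new_value
            for callback in self._listeners:
                callback(new_value)

# Wire a listener the way the .script FieldListener wires applyPosition:
# changing markerPosition forwards the new value to the listener.
marker_position = Field((0.0, 0.0, 0.0))
received = []
marker_position.add_listener(received.append)
marker_position.value = (10.0, 20.0, 30.0)  # listener fires
marker_position.value = (10.0, 20.0, 30.0)  # unchanged, no notification
```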
{{< highlight filename=".script" >}} ```Stan @@ -512,7 +515,7 @@ Window { ``` {{}} -This can also be used for setting/resetting to default values of the application. For example, update your Python function *resetApplication* the following way: +This can also be used for resetting the application to its default values. For example, update your Python function resetApplication the following way: {{< highlight filename=".py" >}} ```Python @@ -530,7 +533,7 @@ def resetApplication(): {{}} ### Information -In the end, we want to provide some information about the volume of the segmented area (in ml). +In the end, we want to provide some information about the volume of the segmented area in milliliters. Add one more field to your *Parameters* section and reuse the internal network fields CalculateVolume.totalVolume. Set field to editable = *No*. @@ -538,12 +541,12 @@ Add the field to the *Info* section of your window. Opening the window of your macro module in MeVisLab now provides all functionalities we wanted to achieve. You can also play around in the window and define some additional boxes or other MDL controls but the basic application prototype is now finished. -![Final Macro module](images/tutorials/summary/Example3_5.png "Final Macro module") +![Final macro module](images/tutorials/summary/Example3_5.png "Final macro module") ### MeVisLab GUI Editor MATE provides a powerful GUI editor showing a preview of your current user interface and allowing to reorder elements in the UI via drag and drop. In MATE, open {{< menuitem "Extras" "Enable GUI Editor" >}}. -![MeVisLab GUI Editor](images/tutorials/summary/Example3_4b.png "MeVisLab GUI Editor") +![MeVisLab GUI editor](images/tutorials/summary/Example3_4b.png "MeVisLab GUI editor") Changing the layout via drag and drop automatically adapts your *.script* file. Save and reload the script and your changes are applied.
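The viewSelectionChanged command described earlier sets the `SoSwitch` child depending on the selected enum value. The mapping itself can be sketched in plain Python. The concrete child indices below are assumptions for illustration; -3 is Open Inventor's SO_SWITCH_ALL value, which renders all children:

```python
# Hypothetical mapping from the selected3DView enum value to a SoSwitch
# whichChild index. The indices depend on the internal network and are
# assumptions here; -3 (SO_SWITCH_ALL) is Open Inventor's "all children".
VIEW_TO_CHILD = {
    "Segmented": 0,   # assumed: first child renders the segmentation
    "File": 1,        # assumed: second child renders the loaded image
    "Both": -3,       # SO_SWITCH_ALL: render all children
}

def view_selection_changed(selected_view):
    """Return the whichChild value for the given enum selection."""
    try:
        return VIEW_TO_CHILD[selected_view]
    except KeyError:
        raise ValueError(f"unknown 3D view selection: {selected_view!r}")
```

Inside MeVisLab, the returned value would be written to the SoSwitch's whichChild field via the module context; here it is just returned for clarity.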
@@ -723,7 +726,7 @@ def applyPosition(): ## Summary * You now added a user interface to your macro module. -* The window opens automatically on double-click {{< mousebutton "right" >}}. +* The window opens automatically on double-click {{< mousebutton "left" >}}. * Fields defined in the *Parameters* section can be modified in the MeVisLab Module Inspector. * Python allows to implement functions executed on events raised by the user or by the application itself. diff --git a/mevislab.github.io/content/tutorials/summary/summary4.md b/mevislab.github.io/content/tutorials/summary/summary4.md index 7adff65ab..67f0af9ac 100644 --- a/mevislab.github.io/content/tutorials/summary/summary4.md +++ b/mevislab.github.io/content/tutorials/summary/summary4.md @@ -51,16 +51,16 @@ Interface { You can now add a viewer or any other module to your macro module and use them for testing. In our example, we add a `CalculateVolume` module to the segmentation mask and a `SoCameraInteraction` with two `OffscreenRenderer` modules to the 3D output. In the end, we need an `ImageCompare` module to compare expected and real image in our test. -![Test Network](images/tutorials/summary/Example4_3.png "Test Network") +![Test network](images/tutorials/summary/Example4_3.png "Test network") ### Create Test Case -Open MeVisLab TestCaseManager via {{< menuitem "File" "Run TestCaseManager..." >}}. On the tab *Test Creation*, define a name of your test case, for example, *TutorialSummaryTest*. Select "Type" as *Macros*, define the package and use the same as for your macro module, select *Import Network*, and select your saved *.mlab* file from the step above. Click *Create*. +Open the TestCaseManager via {{< menuitem "File" "Run TestCaseManager..." >}} or by pressing {{< keyboard "Ctrl" "Alt" "T" >}}. On the tab *Test Creation*, define a name of your test case, for example, *TutorialSummaryTest*. 
Select Type as *Macros*, define the package and use the same as for your macro module, select *Import Network*, and select your saved *.mlab* file from the step above. Click Create. -![Test Creation](images/tutorials/summary/Example4_4.png "Test Creation") +![Test creation](images/tutorials/summary/Example4_4.png "Test creation") -MATE automatically opens the Python file of your test case and it appears in MeVisLab TestCaseManager. +MATE automatically opens the Python file of your test case and it appears in the TestCaseManager. -![Test Creation](images/tutorials/summary/Example4_5.png "Test Creation") +![Test is created and listed](images/tutorials/summary/Example4_5.png "Test is created and listed") ### Write Test Functions in Python @@ -92,7 +92,7 @@ def loadImage(full_path): ``` {{}} -We define the path to a file to be loaded. The function *loadImage* sets the openFile field of the `TutorialSummary` module. +We define the path to a file to be loaded. The function loadImage sets the openFile field of the `TutorialSummary` module. The arrays for the marker location and color will be used later. @@ -131,7 +131,7 @@ def setMarkerPosition(vector): ``` {{}} -The *setMarkerPosition* function gets a three-dimensional vector and sets the markerPosition field of our module. Then, the applyMarker trigger is touched. As the region growing algorithm might need some time to segment, we need to wait until the outSegmentationMask output field is valid, meaning that there is a valid segmentation mask at the segmentation mask output of our macro module. +The setMarkerPosition function gets a three-dimensional vector and sets the markerPosition field of our module. Then, the applyMarker trigger is touched. As the region growing algorithm might need some time to segment, we need to wait until the outSegmentationMask output field is valid, meaning that there is a valid segmentation mask at the segmentation mask output of our macro module. 
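Waiting until the outSegmentationMask output becomes valid is an instance of a generic poll-until-true pattern. The helper below sketches it with the standard library only; it is a generic illustration under the assumption of a boolean predicate, not the TestCenter's own wait API:

```python
import time

def wait_until(predicate, timeout_s=5.0, poll_interval_s=0.05):
    """Poll `predicate` until it returns True or `timeout_s` elapses.
    Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_interval_s)
    return predicate()

# Example: a fake "segmentation finished" flag that flips after a few polls,
# standing in for the outSegmentationMask output becoming valid.
state = {"valid": False, "checks": 0}

def output_is_valid():
    state["checks"] += 1
    if state["checks"] >= 3:   # pretend the region growing finished
        state["valid"] = True
    return state["valid"]
```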
Finally, we need to reset the application to its initial state, so that each test case has the initial start conditions of the application. A test case should never depend on another test case so that they all can be executed exclusively. @@ -288,7 +288,7 @@ Again, we reset the application to an initial state, load the image, and set a m Finally, an image comparison is done for the 3D rendering using the old and the new color. The images shall differ. -The call *MLAB.processInventorQueue()* is sometimes necessary if an Open Inventor scene changed via Python scripting, because the viewers might not update immediately after changing the field. MeVisLab is now forced to process the queue in Open Inventor and to update the renderings. +The call MLAB.processInventorQueue() is sometimes necessary if an Open Inventor scene changed via Python scripting, because the viewers might not update immediately after changing the field. MeVisLab is now forced to process the queue in Open Inventor and to update the renderings. #### Requirement 8: The total volume of the segmented volume shall be calculated and shown (in ml) For the correctness of the volume calculation, we added the `CalculateVolume` module to our test network. The volume given by our macro is compared to the volume of the segmentation from output outSegmentationMask calculated by the `CalculateVolume` module. @@ -330,7 +330,7 @@ def TEST_VolumeCalculation(): ##### Requirement 9.2: Segmentation results ##### Requirement 9.3: All -In the end, we want to develop a testcase for the 3D toggling of the view. We cannot exactly test if the rendering is correct; therefore, we will check if the 3D rendering image changes when toggling the 3D view. We will use the modules `OffscreenRenderer`, `ImageCompare`, and `SoCameraInteraction`, which we added to our test network. +In the end, we want to develop a test case for the 3D toggling of the view. 
We cannot exactly test if the rendering is correct; therefore, we will check if the 3D rendering image changes when toggling the 3D view. We will use the modules `OffscreenRenderer`, `ImageCompare`, and `SoCameraInteraction`, which we added to our test network. Initially, without any marker and segmentation, the views *Both* and *Head* show the same result. After adding a marker, we are going to test if different views result in different images. @@ -383,13 +383,13 @@ def TEST_Toggle3DVolumes(): {{}} ### Sorting Order in TestCaseManager -The MeVisLab TestCaseManager sorts your test cases alphabetically. Your test cases should look like this now: +The TestCaseManager sorts your test cases alphabetically. Your test cases should look like this now: -![TestCaseManager Sorting](images/tutorials/summary/Example4_6.png "TestCaseManager Sorting") +![TestCaseManager sorting](images/tutorials/summary/Example4_6.png "TestCaseManager sorting") Generally, test cases should not depend on each other and the order of their execution should not matter. Sometimes it makes sense though to execute tests in a certain order, for example, for performance reasons. In this case, you can add numeric prefixes to your test cases. This might look like this then: -![TestCaseManager Custom Sorting](images/tutorials/summary/Example4_7.png "TestCaseManager Custom Sorting") +![TestCaseManager custom sorting](images/tutorials/summary/Example4_7.png "TestCaseManager custom sorting") ### Not Testable Requirements As already mentioned, some requirements cannot be tested in an automated environment. Human inspection cannot be replaced completely. @@ -425,7 +425,7 @@ Logging.showFile("Link to screenshot file", result) * Testcase numbering allows you to sort them and define execution order. 
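Because the TestCaseManager sorts test cases alphabetically, numeric prefixes only yield the intended order when they also sort lexicographically. This caveat is easy to demonstrate in plain Python (the test names below are invented):

```python
# Zero-padded prefixes: lexicographic order matches numeric order.
padded = ["TEST_002_LoadImage", "TEST_010_Toggle3D", "TEST_001_Reset"]
print(sorted(padded))
# ['TEST_001_Reset', 'TEST_002_LoadImage', 'TEST_010_Toggle3D']

# Without zero-padding, "TEST_10_..." sorts before "TEST_1_..." and
# "TEST_2_...", so the intended execution order is broken.
unpadded = ["TEST_2_LoadImage", "TEST_10_Toggle3D", "TEST_1_Reset"]
print(sorted(unpadded))
# ['TEST_10_Toggle3D', 'TEST_1_Reset', 'TEST_2_LoadImage']
```

Zero-padding the prefixes to a fixed width keeps alphabetical and numeric order in sync.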
{{}} -Additional information about MeVisLab TestCenter can be found in {{< docuLinks "/Resources/Documentation/Publish/SDK/TestCenterManual/index.html" "TestCenter Manual" >}} +Additional information about the MeVisLab TestCenter can be found in {{< docuLinks "/Resources/Documentation/Publish/SDK/TestCenterManual/index.html" "TestCenter Manual" >}} {{}} {{< networkfile "examples/summary/TutorialSummaryTest.zip" >}} diff --git a/mevislab.github.io/content/tutorials/summary/summary5.md b/mevislab.github.io/content/tutorials/summary/summary5.md index 648f13f0e..da797f2ea 100644 --- a/mevislab.github.io/content/tutorials/summary/summary5.md +++ b/mevislab.github.io/content/tutorials/summary/summary5.md @@ -20,8 +20,10 @@ menu: ## Introduction Your macro module has been tested manually and/or automatically? Then, you should create your first installable executable and deliver it to your customer(s) for final evaluation. + + {{}} -This step requires a valid **MeVisLab ApplicationBuilder** license. It extends the **MeVisLab SDK**, so that you can generate an installer of your developed macro module. +This step requires a valid **MeVisLab ApplicationBuilder** license. It extends the **MeVisLab SDK**, so that you can generate an installer out of your developed macro module. Free evaluation licenses of the **MeVisLab ApplicationBuilder**, time-limited to three months, can be requested at [sales(at)mevislab.de](mailto://sales@mevislab.de). {{}} @@ -33,40 +35,40 @@ The MeVisLab Project Wizard for standalone applications {{Check if required tools are installed. The following dialog opens: ![Check required tools](images/tutorials/summary/Example5_2.png "Check required tools") -You can see that [NSIS](https://nsis.sourceforge.io/Download) and either [Dependency Walker](http://www.dependencywalker.com/) or [Dependencies](https://github.com/lucasg/Dependencies) are necessary to create an installable executable. MeVisLab provides information about the necessary version(s). 
+You can see that [NSIS](https://nsis.sourceforge.io/Download) and either [Dependency Walker](https://www.dependencywalker.com/) or [Dependencies](https://github.com/lucasg/Dependencies) are necessary to create an installable executable. MeVisLab provides information about the necessary version(s). Download and install/extract *NSIS* and *Dependency Walker* or *Dependencies*. Add both executables to your *PATH* environment variable, for example, *C:\Program Files\depends* and *C:\Program Files (x86)\NSIS*. -Restart MeVisLab and open Project Wizard again. All required tools should now be available. +Restart MeVisLab and open the Project Wizard again. All required tools should now be available. ### Use MeVisLab Project Wizard to Generate the Installer -Select your macro module and the package and click *Next*. +Select your macro module and the package and click Next >. ![Welcome](images/tutorials/summary/Example5_3.png "Welcome") The general settings dialog allows you to define a name for your application. You can also define a version, in our case, we decide not to be finished and have a version *0.5*. You can include debug files and decide to build a desktop or web application. We want to build an *Application Installer* for a desktop system. You can decide to precompile your Python files and you have to select your MeVisLab **MeVisLab ApplicationBuilder** license. -![General Settings](images/tutorials/summary/Example5_4.png "General Settings") +![General settings](images/tutorials/summary/Example5_4.png "General settings") Define your license text that is shown during installation of your executable. You can decide to use our predefined text, select a custom file, or do not include any license text. 
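The wizard's tool check essentially verifies that the required executables are reachable via the PATH environment variable. A minimal cross-check can be scripted with Python's standard library; the tool names below are assumptions (NSIS ships makensis, Dependency Walker ships depends.exe on Windows):

```python
import shutil

def check_tools(tool_names):
    """Map each tool name to its full path if found on PATH, else None."""
    return {name: shutil.which(name) for name in tool_names}

# Example: the installer toolchain mentioned above (names are assumptions).
status = check_tools(["makensis", "depends"])
for name, path in status.items():
    print(f"{name}: {'found at ' + path if path else 'NOT found on PATH'}")
```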
-![License Text](images/tutorials/summary/Example5_5.png "License Text") +![License text](images/tutorials/summary/Example5_5.png "License text") The next dialog can be skipped for now, you can include additional files into your installer that are not automatically added by MeVisLab from the dependency analysis. -![Manual File Lists](images/tutorials/summary/Example5_6.png "Manual File Lists") +![Manual file lists](images/tutorials/summary/Example5_6.png "Manual file lists") Define how the window of your application shall look. -![Application Options](images/tutorials/summary/Example5_7.png "Application Options") +![Application options](images/tutorials/summary/Example5_7.png "Application options") Skip the next dialog, we do not need additional installer options. -![Installer Options](images/tutorials/summary/Example5_8.png "Installer Options") +![Installer options](images/tutorials/summary/Example5_8.png "Installer options") The MeVisLab ToolRunner starts generating your installer. After finishing installer generation, you will find a link to the target directory. @@ -90,8 +92,8 @@ The *.mlinstall* file provides all information you just entered into the wizard. The file is initially generated by the Project Wizard. Having a valid file already, you can create new versions by using the MeVisLab ToolRunner. -#### Shell Skript -The shell skript allows you to generate the executable again via a Unix shell like bash. You do not need the Project Wizard anymore now. +#### Shell Script +The shell script allows you to generate the executable again via a Unix shell such as Bash. You do not need the Project Wizard anymore now. #### Software Bill of Materials [SBOM] The SBOM file includes a list of all third-party components, libraries, and dependencies included into your installer by MeVisLab. We use the standard format *CycloneDX* that allows to import this file to standard evaluation tools like [Dependency-Track](https://dependencytrack.org). 
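A CycloneDX SBOM in its JSON flavor lists third-party components under a top-level components array, so extracting a readable dependency list is straightforward. The sketch below assumes the JSON format with minimal name/version fields; the sample document is hand-written for illustration, not a real MeVisLab SBOM:

```python
import json

def list_components(sbom_json):
    """Return 'name@version' strings for each component in a CycloneDX SBOM."""
    sbom = json.loads(sbom_json)
    return [f"{c.get('name', '?')}@{c.get('version', '?')}"
            for c in sbom.get("components", [])]

# Minimal hand-written CycloneDX-style document for illustration.
sample = json.dumps({
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {"type": "library", "name": "zlib", "version": "1.2.13"},
        {"type": "library", "name": "libpng", "version": "1.6.39"},
    ],
})
print(list_components(sample))
```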
@@ -105,7 +107,7 @@ The installer initially shows a welcome screen showing the name and version of y Next, you will see your selected license agreement from the project wizard and a selection to install for anyone or just for the current user. -![License Agreement](images/tutorials/summary/Example5_11.png "License Agreement") +![License agreement](images/tutorials/summary/Example5_11.png "License agreement") You can also select to create shortcuts and desktop icons. @@ -125,11 +127,11 @@ MeVisLab executables require an additional **MeVisLab Runtime** license. It make Free evaluation licenses of the **MeVisLab ApplicationBuilder** and **MeVisLab Runtime** licenses for testing purposes can be requested at [sales(at)mevislab.de](mailto://sales@mevislab.de). {{}} -![Runtime License](images/tutorials/summary/Example5_14.png "Runtime License") +![Runtime license](images/tutorials/summary/Example5_14.png "Runtime license") After entering your license file, the application runs and you can use it on a customer system. -![Installed Application](images/tutorials/summary/Example5_15.png "Installed Application") +![Installed application](images/tutorials/summary/Example5_15.png "Installed application") {{}} By default, your user interface uses a standard stylesheet for colors and appearance of your user interface elements. The style can be customized easily. @@ -138,4 +140,4 @@ By default, your user interface uses a standard stylesheet for colors and appear ## Summary * The **MeVisLab ApplicationBuilder** allows you to create installable executables from your MeVisLab networks. * The resulting application can be customized to your needs via the Project Wizard. -* Your application will be licensed separately, so that you can completely control the usage. +* Your application will be licensed separately so that you can completely control the usage. 
diff --git a/mevislab.github.io/content/tutorials/summary/summary6.md b/mevislab.github.io/content/tutorials/summary/summary6.md index e737e162d..35c58d6f4 100644 --- a/mevislab.github.io/content/tutorials/summary/summary6.md +++ b/mevislab.github.io/content/tutorials/summary/summary6.md @@ -63,7 +63,7 @@ Window { Back in MeVisLab IDE, your user interface should now provide the possibility to define an alpha value of the overlay. Changes are applied automatically because you reused the field of the `SoView2DOverlay` module directly. -![Updated User Interface](images/tutorials/summary/Example6_1.png "Updated User Interface") +![Updated user interface](images/tutorials/summary/Example6_1.png "Updated user interface") You can also update your Python files for new or updated requirements. In this example we just want to show the basic principles; therefore, we only add this new element to the *.script* file. diff --git a/mevislab.github.io/content/tutorials/summary/summary7.md b/mevislab.github.io/content/tutorials/summary/summary7.md index 41f229a02..44bfa72ab 100644 --- a/mevislab.github.io/content/tutorials/summary/summary7.md +++ b/mevislab.github.io/content/tutorials/summary/summary7.md @@ -26,7 +26,7 @@ In this step you are recreating your application installer after changing the UI You do not need to use the Project Wizard now, because you already have a valid *.mlinstall* file. The location should be in your package under *.\Configuration\Installers\TutorialSummary*. Open the file in any text editor and search for the *$VERSION 0.5*. Change the version to something else, in our case, we now have our first major release 1.0. {{}} -You can also run the Project Wizard again but keep in mind that manual changes on your *.mlinstall* file might be overwritten. The wizard recreates your *.mlinstall* file whereas the ToolRunner just uses it. +You can also run the Project Wizard again but keep in mind that manual changes on your *.mlinstall* file might be overwritten. 
The Wizard recreates your *.mlinstall* file whereas the ToolRunner just uses it. {{}} ### Use MeVisLab ToolRunner @@ -34,14 +34,14 @@ Save the file and open *MeVisLab ToolRunner*. ![MeVisLab ToolRunner](images/tutorials/summary/Example7_1.png "MeVisLab ToolRunner") -Open the *.mlinstall* file in ToolRunner and select the file. Click *Run on Selection*. +Open the *.mlinstall* file in ToolRunner and select the file. Click Run on Selection or just double-click {{< mousebutton "left" >}} the installer entry. ![Run on Selection](images/tutorials/summary/Example7_2.png "Run on Selection") The ToolRunner automatically builds your new installer using version 1.0. ### Install Application Again -Execute your installable executable again. You do not have to uninstall previous version(s) of your application first. Already existing applications will be replaced by new installation - at least if you select the same target directory. +Execute your installable executable again. You do not have to uninstall previous version(s) of your application first. Already existing applications will be replaced by new installation — at least if you select the same target directory. ![Install new version](images/tutorials/summary/Example7_3.png "Install new version") diff --git a/mevislab.github.io/content/tutorials/summary/summary8.md b/mevislab.github.io/content/tutorials/summary/summary8.md index b2235d823..695c12364 100644 --- a/mevislab.github.io/content/tutorials/summary/summary8.md +++ b/mevislab.github.io/content/tutorials/summary/summary8.md @@ -20,6 +20,8 @@ menu: ## Introduction This step explains how to run your developed application in a browser. The MeVisLab network remains the same, only some adaptations are necessary for running any macro module in a browser window. + + {{}} This step requires a valid **MeVisLab Webtoolkit** license. It extends the **MeVisLab SDK**, so that you can develop web macro modules. 
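The manual edit of the *$VERSION 0.5* entry in the *.mlinstall* file, described for the ToolRunner step, can also be scripted. The sketch below assumes the simple `$VERSION <number>` line format shown in the tutorial:

```python
import re

def bump_mlinstall_version(text, new_version):
    """Replace the $VERSION value in .mlinstall-style text.
    Assumes the '$VERSION 0.5' line format from the tutorial."""
    updated, count = re.subn(r"(\$VERSION\s+)[\d.]+", rf"\g<1>{new_version}", text)
    if count == 0:
        raise ValueError("no $VERSION entry found")
    return updated

print(bump_mlinstall_version("$VERSION 0.5", "1.0"))  # $VERSION 1.0
```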
@@ -36,11 +38,11 @@ Open Project Wizard via {{< menuitem "File" "Run Project Wizard..." >}} and sele Run the Wizard and enter details of your web macro module. -![Web macro module properties](images/tutorials/summary/Example8_2.png "Web macro module properties") +![Web macro: module properties](images/tutorials/summary/Example8_2.png "Web macro: module properties") -Click *Next* and select optional web plugin features. Click *Create*. +Click Next > and select optional web plugin features. Click Create. -![Web macro module](images/tutorials/summary/Example8_3.png "Web macro module") +![Web macro module: plugin](images/tutorials/summary/Example8_3.png "Web macro module: plugin") The folder of your project automatically opens in an Explorer window. @@ -58,11 +60,11 @@ Open the internal network of your previously created macro module from [Step 2]( We are going to develop a web application; therefore, we need special `RemoteRendering` modules for the viewer. Add two `RemoteRendering` modules and a `SoCameraInteraction` to your workspace and connect them to your existing modules as seen below. -![Remote Rendering](images/tutorials/summary/Example8_5b.png "Remote Rendering") +![Remote rendering modules](images/tutorials/summary/Example8_5b.png "Remote rendering modules") -{{}} +{{< alert class="info" caption="Additional Info" >}} We are using the hidden outputs of the `View2D` and the `SoExaminerViewer`. You can show them by pressing the *SPACE* key. -{{}} +{{< /alert >}} #### Develop the User Interface Make sure to have both macro modules visible in MeVisLab SDK, we are reusing the *.script* and *.py* files developed in [Step 3](tutorials/summary/summary3/). @@ -94,7 +96,7 @@ Web { ``` {{}} -Open the script file of the *TutorialSummary* module from [Step 3](tutorials/summary/summary3/). Copy the output section to your web macro and define internalName as the output of your `RemoteRendering` modules. 
+Open the script file of the `TutorialSummary` module from [Step 3](tutorials/summary/summary3/). Copy the output section to your web macro and define internalName as the output of your `RemoteRendering` modules. You can also copy all fields from the *Parameters* section to your web macro module script. @@ -167,11 +169,11 @@ Interface { Reloading your web macro in MeVisLab SDK now shows the same outputs as the original macro module. The only difference is the type of your output. It changed from MLImage and Open Inventor scene to MLBase from your `RemoteRendering` modules. -![Macro modules](images/tutorials/summary/Example8_7.png "Macro modules") +![Macro modules: with RemoteRendering outputs](images/tutorials/summary/Example8_7.png "Macro modules: with RemoteRendering outputs") The internal network of your web macro should look like this: -![Macro modules](images/tutorials/summary/Example8_8.png "Macro modules") +![Macro modules: internal network](images/tutorials/summary/Example8_8.png "Macro modules: internal network") You can emulate the final viewer by adding a `RemoteRenderingClient` module to the outputs of your web macro. @@ -280,7 +282,7 @@ MLABRemote.setup(ctx) ``` {{}} -Copy the Python functions from *TutorialSummary.py* to *TutorialSummaryBrowser.py*. They can remain unchanged but require an additional *@allowedRemoteCall* function. This is necessary to explicitly allow remote execution of the function and is disabled by default for security reasons. +Copy the Python functions from *TutorialSummary.py* to *TutorialSummaryBrowser.py*. They can remain unchanged but require an additional @allowedRemoteCall decorator. This is necessary to explicitly allow remote execution of the function, which is disabled by default for security reasons. 
{{< highlight filename="TutorialSummaryBrowser.py" >}} ```Python diff --git a/mevislab.github.io/content/tutorials/testing.md b/mevislab.github.io/content/tutorials/testing.md index ca149c67b..01b26c3a0 100644 --- a/mevislab.github.io/content/tutorials/testing.md +++ b/mevislab.github.io/content/tutorials/testing.md @@ -16,19 +16,18 @@ menu: # MeVisLab Tutorial Chapter VI {#TutorialChapter6} ## Testing, Profiling, and Debugging in MeVisLab {#TutorialTesting} - The MeVisLab Integrated Development Environment (IDE) provides tools to write automated tests in Python, to profile your network performance, and to debug your Python code. All of these functionalities will be addressed in this chapter. ### Testing -The MeVisLab TestCenter is the starting point of your tests. Select {{}} to open the user interface of the TestCaseManager. +The *MeVisLab TestCenter* is the starting point of your tests. Select {{}} or press {{< keyboard "Ctrl" "Alt" "T">}} to open the user interface of the `TestCaseManager`. ![MeVisLab TestCaseManager](images/tutorials/testing/TestCaseManager.png "MeVisLab TestCaseManager") #### Test Selection -The Test Selection allows you to define a selection of test cases to be executed. The list can be configured by defining a filter, manually selecting the packages ([see Example 2.1: Package Creation](tutorials/basicmechanisms/macromodules/package)) to be scanned for test cases. All test cases found in the selected packages are shown. +The *Test Selection* allows you to define a selection of test cases to be executed. The list can be configured by defining a filter or by manually selecting the packages ([see Example 2.1: Package Creation](tutorials/basicmechanisms/macromodules/package)) to be scanned for test cases. All test cases found in the selected packages are shown. -On the right side of the Test Selection tab, you can see a list of functions in the test case. Each list entry is related to a Python function. You can select the functions to be executed. 
If your test case contains a network, you can open the *.mlab* file or edit the Python file in MATE. +On the right side of the *Test Selection* tab, you can see a list of functions in the test case. Each list entry is related to a Python function. You can select the functions to be executed. If your test case contains a network, you can open the *.mlab* file or edit the Python file in MATE. #### Test Reports The results of your tests are shown as a report after execution. @@ -45,11 +44,11 @@ If you have multiple versions installed, make sure to check and, if needed, alte ### Profiling Profiling allows you to get detailed information on the behavior of your modules and networks. You can add the Profiling view via {{}}. The Profiling will be displayed in the Views area of the MeVisLab IDE. -![MeVisLab Profiling](images/tutorials/testing/Profiling.png "MeVisLab Profiling") +![MeVisLab profiling](images/tutorials/testing/Profiling.png "MeVisLab profiling") With enabled profiling, your currently opened network will be inspected and the CPU and memory usage and many more details of each module and function are logged. ### Debugging Debugging can be enabled whenever the integrated text editor MATE is opened. Having a Python file opened, you can enable debugging via {{}}. You can define break points in Python, add variables to your watchlist, and walk through your break points just like in other editors and debuggers. 
-![MeVisLab Debugging](images/tutorials/testing/MATE_debugging.png "MeVisLab Debugging") +![MeVisLab debugging](images/tutorials/testing/MATE_debugging.png "MeVisLab debugging") diff --git a/mevislab.github.io/content/tutorials/testing/testingexample1.md b/mevislab.github.io/content/tutorials/testing/testingexample1.md index 9d9779287..5ec535c27 100644 --- a/mevislab.github.io/content/tutorials/testing/testingexample1.md +++ b/mevislab.github.io/content/tutorials/testing/testingexample1.md @@ -18,15 +18,15 @@ menu: {{< youtube "DqpVaKai_00" >}} ## Introduction -In this example you will learn how to write an automated test for a simple network using the `DicomImport`, `MinMaxScan`, and `View3D` modules. Afterward, you will be able to write test cases for any other module and network yourself. +In this example, you will learn how to write an automated test for a simple network using the `DicomImport`, `MinMaxScan`, and `View3D` modules. Afterward, you will be able to write test cases for any other module and network yourself. MeVisLab provides two options to compare a test result with an expected result: #### ASSERT Multiple **ASSERT_*** functions to compare expected and actual result are available, for example **ASSERT_EQ()** (check if two values are equal) or **ASSERT_GT()** (check if value is greater than another value). -In case an assertion fails, an exception is thrown and the test execution stops. +If an assertion fails, an exception is thrown and the test execution stops. #### EXPECT -The same comparisons can be done by using **EXPECT_***. The functions return *true* or *false* and depending on the result you can decide how to proceed. +The same comparisons can be done by using **EXPECT_***. The functions return *true* or *false* and, depending on the result, you can decide how to proceed. Make sure to use the right comparison methods depending on your needs. 
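The behavioral difference between the two families can be sketched in plain Python. This is only an illustration of the semantics described above, not the actual TestSupport.Macros implementation:

```python
# Plain-Python sketch of the ASSERT_* vs. EXPECT_* semantics
# (illustration only, not the real TestSupport.Macros code).

def assert_eq_sketch(expected, actual):
    # ASSERT_*-style: raise on mismatch, which aborts the running test.
    if expected != actual:
        raise AssertionError(f"expected {expected!r}, got {actual!r}")

def expect_eq_sketch(expected, actual):
    # EXPECT_*-style: just report the outcome; the test decides how to go on.
    return expected == actual

if not expect_eq_sketch(1.0, 0.5):
    print("values differ, but the test keeps running")

assert_eq_sketch(1.0, 1.0)  # passes silently
```

In short: use **ASSERT_*** when a mismatch makes the rest of the test meaningless, and **EXPECT_*** when you want to note a failure but continue checking.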
@@ -39,14 +39,14 @@ Additional information can be found in {{< docuLinks "/Resources/Documentation/P ### Creating the Network to be Used for Testing Add the following modules to your workspace and connect them as seen below: -![Testcase network ](images/tutorials/testing/testNetwork1.png "Testcase network ") +![Testcase network](images/tutorials/testing/testNetwork1.png "Testcase network") Save your network as *NetworkTestCase.mlab*. ## Test Creation -Open the MeVisLab TestCaseManager via menu {{}}. The following window will appear. +Open the TestCaseManager via menu {{}} or by pressing {{< keyboard "Ctrl" "Alt" "T">}}. The following window will appear. -![TestCaseManager window ](images/tutorials/testing/testCaseManagerWindow.png "TestCaseManager window ") +![TestCaseManager window](images/tutorials/testing/testCaseManagerWindow.png "TestCaseManager window") Change to the *Test Creation* tab and enter details of your test case as seen below. Make sure to have a package available already. @@ -56,9 +56,9 @@ Details on package creation can be found in [Example 2.1: Package creation](./tu Select your saved *NetworkTestCase.mlab* file. - ![Test Creation window ](images/tutorials/testing/TestCreation.png "Test Creation window ") + ![Test Creation window](images/tutorials/testing/TestCreation.png "Test Creation window") -Click *Create*. The MeVisLab text editor MATE will automatically open and display the Python file of your test. Add the below listed code to the Python file. +Click Create. The MeVisLab text editor MATE will automatically open and display the Python file of your test. Add the code listed below to the Python file. 
{{< highlight filename="NetworkTestCase.py" >}} ```Python @@ -66,7 +66,7 @@ from mevis import * from TestSupport import Base, Fields, Logging from TestSupport.Macros import * -filePath="C:/Program Files/MeVisLab3.6.0/Packages/MeVisLab/Resources/DemoData/BrainT1Dicom" +filePath="C:/Program Files//Packages/MeVisLab/Resources/DemoData/BrainT1Dicom" def OpenFiles(): ctx.field("DicomImport.inputMode").value = "Directory" @@ -77,21 +77,21 @@ def OpenFiles(): MLAB.sleep(1) Base.ignoreWarningAndError(MLAB.processEvents) ctx.field("DicomImport.selectNextItem").touch() - MLAB.log("Files imported from: "+ctx.field("DicomImport.source").value) + MLAB.log("Files imported from: " + ctx.field("DicomImport.source").value) def TEST_DicomImport(): - expectedValue=1.0 + expectedValue = 1.0 OpenFiles() - currentValue=ctx.field("DicomImport.progress").value - ASSERT_FLOAT_EQ(expectedValue,currentValue) + currentValue = ctx.field("DicomImport.progress").value + ASSERT_FLOAT_EQ(expectedValue, currentValue) ``` {{}} -The *filePath* variable defines the absolute path to the DICOM files that will be given to source field of the `DicomImport` module in the second step of the *OpenFiles* function. +The filePath variable defines the absolute path to the DICOM files that will be given to the source field of the `DicomImport` module in the second step of the OpenFiles function. -The *OpenFiles* function first defines the `DicomImport` field inputMode to be a *Directory*. If you want to open single files, set this field's value to *Files*. Then, the source field is set to your previously defined filePath. After clicking triggerImport, the `DicomImport` module needs some time to load all images in the directory and process the DICOM tree. We have to wait until the field ready is *True*. While the import is not ready yet, we wait for 1 millisecond at a time and check again. *MLAB.processEvents()* lets MeVisLab continue execution while waiting for the `DicomImport` to be ready. 
+The OpenFiles function first defines the `DicomImport` field inputMode to be a *Directory*. If you want to open single files, set this field's value to *Files*. Then, the source field is set to your previously defined filePath. After clicking triggerImport, the `DicomImport` module needs some time to load all images in the directory and process the DICOM tree. We have to wait until the field ready is *True*. While the import is not ready yet, we wait for 1 millisecond at a time and check again. MLAB.processEvents() lets MeVisLab continue execution while waiting for the `DicomImport` to be ready. -When calling the function *TEST_DicomImport*, an expected value of 1.0 is defined. Then, the DICOM files are opened. +When calling the function TEST_DicomImport, an expected value of *1.0* is defined. Then, the DICOM files are opened. {{}} Call Base.ignoreWarningAndError(MLAB.processEvents) instead of MLAB.processEvents() if you receive error messages regarding invalid DICOM tags. @@ -99,14 +99,14 @@ Call Base.ignoreWarningAndError(MLAB.processEvents) ins When ready is true, the test touches the selectNextItem trigger, so that the first images of the patient are selected and shown. The source directory will be written on the console as an additional log message for informative purposes. -The value of our `DicomImport`s progress field is saved as the *currentValue* variable and compared to the *expectedValue* variable by calling *ASSERT_FLOAT_EQ(expectedValue,currentValue)* to determine if the DICOM import has finished (*currentValue* and *expectedValue* are equal) or not. +The value of our `DicomImport`s progress field is saved as the currentValue variable and compared to the expectedValue variable by calling ASSERT_FLOAT_EQ(expectedValue, currentValue) to determine if the DICOM import has finished (currentValue and expectedValue are equal) or not. 
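The wait-until-ready loop in OpenFiles is an instance of a generic polling pattern. The following standalone sketch uses time.sleep in place of MLAB.sleep/MLAB.processEvents, adds a timeout the tutorial code does not have, and uses a made-up readiness probe in place of the DicomImport.ready field:

```python
import time

def wait_until_ready(is_ready, poll_seconds=0.001, timeout_seconds=5.0):
    # Poll until is_ready() returns True, like the loop that waits for
    # DicomImport.ready; the timeout guards against hanging forever.
    deadline = time.monotonic() + timeout_seconds
    while not is_ready():
        if time.monotonic() > deadline:
            return False
        # In MeVisLab this slot is MLAB.sleep(1) plus MLAB.processEvents().
        time.sleep(poll_seconds)
    return True

# Hypothetical stand-in for the DicomImport.ready field:
polls = {"count": 0}

def import_finished():
    polls["count"] += 1
    return polls["count"] >= 3  # "ready" after the third poll

print(wait_until_ready(import_finished))  # → True
```

The timeout is a defensive addition for plain scripts; inside the TestCenter, a hanging import would otherwise block the whole test run.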
You can play around with the differences between **ASSERT_FLOAT_EQ()** and **EXPECT_FLOAT_EQ()** and let your test fail to see the differences. ### Run Your Test Case -Open the TestCase Manager und run your test by selecting your test case and clicking on the *Play* button in the bottom right corner. +Open the TestCaseManager and run your test by selecting your test case and clicking {{< mousebutton "left" >}} on the Play button in the bottom right corner. -![Run Test Case](images/tutorials/testing/runTestCase.png "Run Test Case") +![Run test case](images/tutorials/testing/runTestCase.png "Run test case") After execution, the ReportViewer will open automatically displaying your test's results. @@ -126,7 +126,7 @@ Please observe that field access through Python scripting works differently for ``` {{}} -*Imagine unpeeled nuts in a bag as a concept - the field as a nut, their module as their nutshell, and the bag as the global macro.* +*Imagine unpeeled nuts in a bag as a concept — the field as a nut, their module as their nutshell, and the bag as the global macro.* {{}} [Example 2.2: Global macro modules](tutorials/basicmechanisms/macromodules/globalmacromodules/) provides additional info on global macro modules and their creation. @@ -135,7 +135,7 @@ Please observe that field access through Python scripting works differently for ## Exercise Create a global macro module and implement the following test objectives for both (network and macro module): * Check if the file exists. -* Check if the max value of file is greater than zero. +* Check if the maximum voxel value of an image file is greater than zero. * Check if the `View3D` input and `DicomImport` output have the same data. ## Summary @@ -143,6 +143,6 @@ Create a global macro module and implement the following test objectives for bot * Tests can be executed on networks and macro modules. * The test results are shown in a ReportViewer. 
* **ASSERT*** functions throw an exception if the expected result differs from the actual result. The test run is aborted in such a case. -* **EXPECT*** functions return *true* or *false*. You can decide yoursel how to continue your test. +* **EXPECT*** functions return *true* or *false*. You can decide yourself how to continue your test. {{< networkfile "examples/testing/example1/TestCases.zip" >}} diff --git a/mevislab.github.io/content/tutorials/testing/testingexample2.md b/mevislab.github.io/content/tutorials/testing/testingexample2.md index f84e1d824..e330d7bfc 100644 --- a/mevislab.github.io/content/tutorials/testing/testingexample2.md +++ b/mevislab.github.io/content/tutorials/testing/testingexample2.md @@ -25,19 +25,19 @@ In this example we are using the MeVisLab Profiler to inspect the memory and CPU ### Creating the Network to be Used for Profiling You can open any network you like, here we are using the example network of the module `MinMaxScan` for profiling. Add the module `MinMaxScan` to your workspace, open the example network via right-click {{}} and select {{}}. -![MinMaxScan Example Network](images/tutorials/testing/profiling_network.png "MinMaxScan Example Network") +![MinMaxScan example network](images/tutorials/testing/profiling_network.png "MinMaxScan example network") ### Enable Profiling Next, enable the MeVisLab Profiler via menu item {{}}. The Profiler is opened in your views area but can be detached and dragged over the workspace holding the left mouse button {{}}. -![MeVisLab Profiling](images/tutorials/testing/Profiling.png "MeVisLab Profiling") +![MeVisLab profiling](images/tutorials/testing/Profiling.png "MeVisLab profiling") Enable profiling by checking *Enable* in the top left corner of the Profiling window. ### Inspect Your Network -Now open the `View2D` module's panel via double-click and scroll through the slices. Inspect the Profiler. 
+Now open the `View2D` module's panel via double-click {{< mousebutton "left" >}} and scroll through the slices. Inspect the Profiler. -![MeVisLab Profiling Network](images/tutorials/testing/Profiling_Network1.png "MeVisLab Profiling Network") +![MeVisLab profiling network](images/tutorials/testing/Profiling_Network1.png "MeVisLab profiling network") The Profiler shows detailed information about each module in your network. @@ -48,18 +48,18 @@ Also, filtering by module name is handy when you are working with larger network Field values and their changes for all modules in your network can be inspected in the *Fields* tab: -![MeVisLab Profiling Fields](images/tutorials/testing/Profiling_Network2.png "MeVisLab Profiling Fields") +![MeVisLab profiling fields](images/tutorials/testing/Profiling_Network2.png "MeVisLab profiling fields") -In addition to the Profiler window, your modules also provide a tiny bar indicating their current memory and time consumption. +In addition to the Profiler window, your modules also provide a tiny bar indicating their current memory and time consumption relative to the other modules in the network. -![MeVisLab Profiling Module](images/tutorials/testing/Module_Info.png "MeVisLab Profiling Module") +![MeVisLab profiling module](images/tutorials/testing/Module_Info.png "MeVisLab profiling module") {{}} More information about profiling in MeVisLab can be found {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch17.html" "here">}} {{}} {{}} -You need to uncheck the *Enable* checkbox in the top left corner to stop profiling. Closing the window will not automatically end the profiling. +You need to uncheck the Enable checkbox in the top left corner to stop profiling. Closing the window will not automatically end the profiling. 
{{}} ## Summary diff --git a/mevislab.github.io/content/tutorials/testing/testingexample3.md b/mevislab.github.io/content/tutorials/testing/testingexample3.md index 2c6801b7f..f3b5aefed 100644 --- a/mevislab.github.io/content/tutorials/testing/testingexample3.md +++ b/mevislab.github.io/content/tutorials/testing/testingexample3.md @@ -18,26 +18,26 @@ menu: {{}} ## Introduction -In this example you are writing an iterative test. Iterative test functions run a function for every specified input. They return a tuple consisting of the function object called and the inputs iterated over. The iterative test functions are useful if the same function should be applied to different input data. These could be input values, names of input images, etc. +In this example, you are writing an iterative test. Iterative test functions run a function for every specified input. They return a tuple consisting of the inputs iterated over and the function object called. The iterative test functions are useful if the same function should be applied to different input data. These could be input values, names of input images, etc. ## Steps to Do ### Creating the Network to be Used for Testing Add a `LocalImage` and a `DicomTagViewer` module to your workspace and connect them. -![Example Network](images/tutorials/testing/network_test3.png "Example Network") +![Example network](images/tutorials/testing/network_test3.png "Example network") ### Test Case Creation -Open the panel of the `DicomTagViewer` and set *Tag Name* to *WindowCenter*. The value of the DICOM tag from the current input image is automatically set as value. +Open the panel of the `DicomTagViewer` and set Tag Name to *WindowCenter*. The value of the DICOM tag from the current input image is automatically set as value. Save the network. -Start MeVisLab TestCaseManager and create a new test case called *IterativeTestCase* as seen in [Example 1: Writing a simple testcase in MeVisLab](tutorials/testing/testingexample1). 
+Start the TestCaseManager and create a new test case called *IterativeTestCase* as seen in [Example 1: Writing a Simple Test Case in MeVisLab](tutorials/testing/testingexample1). ![DicomTagViewer](images/tutorials/testing/DicomTagViewer.png "DicomTagViewer") ### Defining the Test Data -In TestCaseManager open the test case Python file via *Edit File*. +In the TestCaseManager, open the test case Python file via Edit File. Add a list for test data to be used as input and a prefix for the path of the test data as seen below. @@ -65,7 +65,7 @@ def ITERATIVETEST_TestWindowCenter(): ``` {{}} -This function defines that *testPatient* shall be called for each entry available in the defined list *testData*. Define the function *testPatient*: +This function defines that testPatient shall be called for each entry available in the defined list testData. Define the function testPatient: {{< highlight filename="IterativeTestCase.py" >}} ```Python def testPatient(path, windowCenter): @@ -80,27 +80,27 @@ def testPatient(path, windowCenter): 1. Initially, the path and filename for the module `LocalImage` are set. The data is loaded automatically, because the module has the AutoLoad flag enabled by default. ![LocalImage](images/tutorials/testing/LocalImage.png "LocalImage") -2. Then, the DICOM tree of the loaded file is used to get the *WindowCenter* value (*importValue*). -3. The previously defined value of the `DicomTagViewer` is set as *dicomValue*. +2. Then, the DICOM tree of the loaded file is used to get the *WindowCenter* value (importValue). +3. The previously defined value of the `DicomTagViewer` is set as dicomValue. 4. The final test functions *ASSERT_EQ* evaluate if the given values are equal. {{}} -You can use many other **ASSERT*** possibilities, just try using the MATE autocompletion and play around with them. **ASSERT*** functions throw an exception in case expected and actul values do not fit. Your test execution stops in this case. 
+You can use many other **ASSERT*** possibilities, just try using the MATE autocompletion and play around with them. **ASSERT*** functions throw an exception if the expected and actual values do not match. Your test execution stops in this case. -You can also use **EXPECT*** functions. They return *true* or *false* and you can decide yourself ho your test continues. +You can also use **EXPECT*** functions. They return *true* or *false* and you can decide yourself how your test continues. For details, see {{< docuLinks "/Resources/Documentation/Publish/SDK/TestCenterReference/namespaceTestSupport_1_1Macros.html" "TestCenter Reference" >}} {{}} ### Run Your Iterative Test -Open MeVisLab TestCase Manager and select your package and test case. You will see two test functions on the right side. +Open the TestCaseManager and select your package and test case. You will see two test functions on the right side. -![Iterative Test](images/tutorials/testing/TestCaseManager_TestWindowCenter.png "Iterative Test") +![Iterative test](images/tutorials/testing/TestCaseManager_TestWindowCenter.png "Iterative test") -The identifiers of your test functions are shown as defined in the list (*ProbandT1/2*). The *TestWindowCenter* now runs for each entry in the list and calls the function *testPatient* for each entry using the given values. +The identifiers of your test functions are shown as defined in the list (*ProbandT1/2*). The *TestWindowCenter* now runs for each entry in the list and calls the function testPatient for each entry using the given values. ### Adding Screenshots to Your TestReport -Now, extend your network by adding a `View2D` module and connect it with the `LocalImage` module. Add the following lines to the end of your function *testPatient*: +Now, extend your network by adding a `View2D` module and connect it with the `LocalImage` module. 
Add the following lines to the end of your function testPatient: {{< highlight filename="IterativeTestCase.py" >}} ```Python def testPatient(path, windowCenter): diff --git a/mevislab.github.io/content/tutorials/thirdparty.md b/mevislab.github.io/content/tutorials/thirdparty.md index 59e254d44..2fe63e832 100644 --- a/mevislab.github.io/content/tutorials/thirdparty.md +++ b/mevislab.github.io/content/tutorials/thirdparty.md @@ -16,7 +16,7 @@ menu: # MeVisLab Tutorial Chapter VIII {#TutorialChapter8} ## Using Third-party Software Integrated into MeVisLab {#TutorialThirdParty} -MeVisLab is equipped with a lot of useful software right out of the box, like the Insight Segmentation and Registration Toolkit (ITK) or the Visualization Toolkit (VTK). This chapter works as a guide on how to use some of the third-party components integrated in MeVisLab for your projects via Python scripting. +MeVisLab is equipped with a lot of useful software right out of the box, like the Insight Segmentation and Registration Toolkit (ITK) or the Visualization Toolkit (VTK). This chapter is intended as a guide on how to use some of the third-party components integrated in MeVisLab for your projects via Python scripting. {{}} You will also find instructions to install and use any Python package (e.g., PyTorch) in MeVisLab using the `PythonPip` module. @@ -40,19 +40,19 @@ OpenCV includes, among others, algorithms to: * establish markers to overlay with augmented reality ### assimp -The [THE ASSET IMPORTER LIBRARY](http://www.assimp.org/) supports loading and processing geometric scenes from various well known 3D formats. MeVisLab uses assimp to import these files and reuses the scenes directly in MeVisLab. +The [THE ASSET IMPORTER LIBRARY](https://www.assimp.org/) supports loading and processing geometric scenes from various well known 3D formats. MeVisLab uses assimp to import these files and reuses the scenes directly in MeVisLab. 
A list of supported formats can be found [here](https://assimp-docs.readthedocs.io/en/v5.1.0/about/introduction.html). ### PyTorch \[*not integrated initially*\] -[PyTorch](http://www.pytorch.org) is a machine learning framework based on the Torch library, used for applications such as Computer Vision and Natural Language Processing, originally developed by Meta AI and now part of the Linux Foundation umbrella. +[PyTorch](https://www.pytorch.org) is a machine learning framework based on the Torch library, used for applications such as Computer Vision and Natural Language Processing, originally developed by Meta AI and now part of the Linux Foundation umbrella. The tutorials available here shall provide examples on how to integrate AI into MeVisLab. You can also integrate other Python AI packages the same way. ### Matplotlib [Matplotlib](https://matplotlib.org/) is a library for creating static, animated, and interactive visualizations in Python. -* create publication quality plots +* Create publication-quality plots * Make interactive figures that can be zoomed, panned, and updated * Customize visual style and layout * Export to many file formats diff --git a/mevislab.github.io/content/tutorials/thirdparty/MONAI/monaiexample1.md b/mevislab.github.io/content/tutorials/thirdparty/MONAI/monaiexample1.md index 193c474d3..a62e78e7c 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/MONAI/monaiexample1.md +++ b/mevislab.github.io/content/tutorials/thirdparty/MONAI/monaiexample1.md @@ -26,11 +26,11 @@ As *MONAI* requires *PyTorch*, install it by using the `PythonPip` module as des #### Install MONAI After installing *torch* and *torchvision*, we install *MONAI*. -For installing *MONAI* enter \"*monai*\" into the Command textbox and press *Install*. +For installing *MONAI*, enter \"*monai*\" into the Command textbox and press Install. 
![Install MONAI](images/tutorials/thirdparty/monai_example1_1.png "Install MONAI") -After clicking *Install*, the pip console output opens and you can follow the process of the installation. +After clicking Install, the pip console output opens and you can follow the process of the installation. {{}} If you are behind a proxy server, you may have to set the **HTTP_PROXY** and **HTTPS_PROXY** environment variables to the hostname and port of your proxy. These are used by pip when accessing the internet. diff --git a/mevislab.github.io/content/tutorials/thirdparty/MONAI/monaiexample2.md b/mevislab.github.io/content/tutorials/thirdparty/MONAI/monaiexample2.md index 84a4b0b02..e3aeb6157 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/MONAI/monaiexample2.md +++ b/mevislab.github.io/content/tutorials/thirdparty/MONAI/monaiexample2.md @@ -16,7 +16,7 @@ menu: # Example 2: Applying a Spleen Segmentation Model from MONAI in MeVisLab ## Introduction -In the following, we will perform a spleen segmentation using a model from the *MONAI Model Zoo*. The MONAI Model Zoo is a collection of pretrained models for medical imaging, offering standardized bundles for tasks like segmentation, classification, and detection across MRI, CT, and pathology data, all built for easy use and reproducibility within the MONAI framework. Further information and the required files can be found [here](https://github.com/Project-MONAI/model-zoo/tree/dev "here"). +In the following, we will perform a spleen segmentation using a model from the *MONAI Model Zoo*. The MONAI Model Zoo is a collection of pretrained models for medical imaging, offering standardized bundles for tasks like segmentation, classification, and detection across MRI, CT, and pathology data, all built for easy use and reproducibility within the MONAI framework. Further information and the required files can be found [here](https://github.com/Project-MONAI/model-zoo/tree/dev "MONAI Model Zoo"). 
This example shows how to use the model for **Spleen CT Segmentation** directly in MeVisLab. @@ -24,11 +24,11 @@ This example shows how to use the model for **Spleen CT Segmentation** directly ### Download Necessary Files Create a folder named *spleen_ct_segmentation* somewhere on your system. -Inside this folder, create two subfolders, one named *configs* and another one named *models*, and remember their paths. +Inside this folder, create two subfolders, one named *configs* and another named *models*, and remember their paths. -![Directory Structure](images/tutorials/thirdparty/monai_example2_1.png "Directory Structure"). +![Directory structure](images/tutorials/thirdparty/monai_example2_1.png "Directory structure") -Download all *config* files from [MONAI-Model-Zoo](https://github.com/Project-MONAI/model-zoo/tree/dev/models/spleen_ct_segmentation/configs "MONAI Model-Zoo") and save them in your local *configs* directory. +Download all *config* files from [MONAI Model Zoo](https://github.com/Project-MONAI/model-zoo/tree/dev/models/spleen_ct_segmentation/configs "MONAI Model Zoo") and save them in your local *configs* directory. Download *model* files from [NVIDIA Download Server](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo/model_spleen_ct_segmentation_v1.pt "NVIDIA Download Server") and save it in your local *models* directory. @@ -38,7 +38,7 @@ The path to the latest model *.pt* file can be found in [large_files.yml](https: {{}} ### Download Example Images -The recommended CT images used for training the algorithm can be found [here](https://msd-for-monai.s3-us-west-2.amazonaws.com/Task09_Spleen.tar "here"). +The recommended CT images used for training the algorithm can be found [here](https://msd-for-monai.s3-us-west-2.amazonaws.com/Task09_Spleen.tar). ### Create a Macro Module and Add Inputs and Outputs Add a `PythonImage` module and save the network as *MONAISpleenSegmentation.mlab*. 
@@ -51,7 +51,7 @@ Right-click {{< mousebutton "right" >}} on the group's name and choose *Convert Our new module does not provide any input or output. -![Local Macro Module MONAIDemo](images/tutorials/thirdparty/monai_example2_2.png "Local Macro Module MONAIDemo") +![Local macro module MONAIDemo](images/tutorials/thirdparty/monai_example2_2.png "Local macro module MONAIDemo") Right-click {{< mousebutton "right" >}} on the macro module and select {{< menuitem "Related Files" "MONAIDemo.script">}}. @@ -94,7 +94,7 @@ Right-click {{< mousebutton "right" >}} on the *MONAIDemo.py* and select {{< men ### Create the Network for the Segmentation Right-click {{< mousebutton "right" >}} on the macro module and select {{< menuitem "Related Files" "MONAIDemo.mlab">}}. Create the network seen below. -![MONAIDemo Network](images/tutorials/thirdparty/monai_example2_3a.png "MonaiDemo Network") +![MONAIDemo network](images/tutorials/thirdparty/monai_example2_3a.png "MONAIDemo network") Fields of the internal network can be left with default values; we will change them later. @@ -117,9 +117,9 @@ Interface { If you now open the internal network of your macro module, you can see that the input image is connected to the input of the `Resample3D` module. -![MONAIDemo Internal Network](images/tutorials/thirdparty/monai_example2_3b.png "MonaiDemo Internal Network") +![MONAIDemo internal network: Resample3D connected to the macro's input](images/tutorials/thirdparty/monai_example2_3b.png "MONAIDemo internal network: Resample3D connected to the macro's input") -Again, open the *.script* file and change the internal name of your *outImage* field to reuse the field *Resample3D1.output0*. +Again, open the *.script* file and change the internal name of your outImage field to reuse the field Resample3D1.output0. 
{{< highlight filename="MONAIDemo.script" >}} ```Stan @@ -133,29 +133,28 @@ Interface { ``` {{}} - If you now open the internal network of your macro module, you can see that the output image is connected to the output of the `Resample3D1` module. -![MONAIDemo Internal Network](images/tutorials/thirdparty/monai_example2_3c.png "MonaiDemo Internal Network") +![MONAIDemo internal network: Resample3D1 connected to the macro's output](images/tutorials/thirdparty/monai_example2_3c.png "MONAIDemo internal network: Resample3D1 connected to the macro's output") ### Adapt Input Image to *MONAI* Parameters from Training The model has been trained with strictly defined assumptions about the input image. All values can normally be found in the *inference.json* file in your *configs* directory. -Use the `itkImageFileReader` module to load the file *Task09_Spleen/Task09_SpleenimagesTr/spleen_7.nii.gz* from dowloaded example patients. The *Output Inspector* shows the image and additional information about the size. +Use the `itkImageFileReader` module to load the file *Task09_Spleen/imagesTr/spleen_7.nii.gz* from the downloaded example patients. The Output Inspector shows the image and additional information about the size. -We can see that the image size is 512 x 512 x 114 and the voxel size is 0.9766 x 0.9766 x 2.5. +We can see that the image size is *512 x 512 x 114* and the voxel size is *0.9766 x 0.9766 x 2.5*. ![Output Inspector](images/tutorials/thirdparty/monai_example2_3d.png "Output Inspector") -Connect the module to your local macro module `MonaiDemo`. The result of the segmentation shall be visualized as a semitransparent overlay on your original image. +Connect the module to your local macro module `MONAIDemo`. The result of the segmentation shall be visualized as a semitransparent overlay on your original image. -Add a `SoView2DOverlay` and a `View2D` module and connect them to your local macro module `MonaiDemo`.
+Add a `SoView2DOverlay` and a `View2D` module and connect them to your local macro module `MONAIDemo`. ![Final network](images/tutorials/thirdparty/monai_example2_4.png "Final network") -The **Spleen CT Segmentation** network expects images having a defined voxel size of 1.5 x 1.5 x 2. We want to define these values via fields in the Module Inspector. +The **Spleen CT Segmentation** network expects images with a defined voxel size of *1.5 x 1.5 x 2*. We want to define these values via fields in the Module Inspector. -Open the *.script* file and add the fields start and voxelSize to your local macro module `MonaiDemo`: +Open the *.script* file and add the fields start and voxelSize to your local macro module `MONAIDemo`: {{< highlight filename="MONAIDemo.script" >}} ```Stan @@ -170,17 +169,15 @@ Interface { ``` {{}} -If you reload your module now, we can set the voxel size to use for the segmentation directly in our macro module `MonaiDemo`. Additionally, we can trigger a start function for running the segmentation. This is implemented later. - -![Voxel Size](images/tutorials/thirdparty/monai_example2_4a.png "Voxel Size") +If you reload your module now, you can set the voxel size to use for the segmentation directly in our macro module `MONAIDemo`. Additionally, you can trigger a start function for running the segmentation. This is implemented later. -If you select the output field of the `Resample3D` module in the internal network, you can see the extent of the currently opened image after changing the voxel size to 1.5 x 1.5 x 2. It shows 333 x 333 x 143. +![Voxel size](images/tutorials/thirdparty/monai_example2_4a.png "Voxel size") -![Original Image Size](images/tutorials/thirdparty/monai_example2_5.png "Original Image Size") +If you select the output field of the `Resample3D` module in the internal network, you can see the extent of the currently opened image after changing the voxel size to *1.5 x 1.5 x 2*. It shows *333 x 333 x 143*.
-The algorithm expects image sizes of 160 x 160 x 160. We add this expected size of the image to our macro module in the same way. +The algorithm expects image sizes of *160 x 160 x 160*. We add this expected size of the image to our macro module in the same way. -Open the *.script* file and add the following fields to your local macro module `MonaiDemo`: +Open the *.script* file and add the following fields to your local macro module `MONAIDemo`: {{< highlight filename="MONAIDemo.script" >}} ```Stan @@ -205,7 +202,7 @@ Reload your macro module and enter the following values for your new fields: Next, we change the gray values of the image, because the algorithm has been trained on values between -57 and 164. Again, the values can be found in the *inference.json* file in your *configs* directory. -Open the *.script* file and add the following fields to your local macro module `MonaiDemo`: +Open the *.script* file and add the following fields to your local macro module `MONAIDemo`: {{< highlight filename="MONAIDemo.script" >}} ```Stan @@ -228,7 +225,7 @@ As already done before, we can now define the threshold values for our module v As defined in the *inference.json* file in your *configs* directory, the gray values in the image must be between 0 and 1. -Open the *.script* file and add the following fields to your local macro module `MonaiDemo`: +Open the *.script* file and add the following fields to your local macro module `MONAIDemo`: {{< highlight filename="MONAIDemo.script" >}} ```Stan @@ -251,16 +248,16 @@ Set the following: The algorithm expects NumPy images. NumPy uses the axis order Z, Y, X, whereas MeVisLab uses X, Y, Z. The image needs to be transformed. -Open the panel of the `SwapFlipDimensions` module and select X as *Axis 1* and Z as *Axis 2*. +Open the panel of the `SwapFlipDimensions` module and select *X* as Axis 1 and *Z* as Axis 2.
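The axis reordering performed by the `SwapFlipDimensions` modules can be illustrated in plain NumPy; a sketch with an empty array of the extents mentioned above, not the modules' actual implementation:

```python
import numpy as np

# MeVisLab image extent X=333, Y=333, Z=143 (the resampled example image).
# The same voxels in a NumPy array use the reversed axis order Z, Y, X.
img_zyx = np.zeros((143, 333, 333), dtype=np.float32)

# Swapping the first and last axes (Z <-> X) yields the X, Y, Z order.
# Applying it twice restores the original order, which is why the same
# Axis 1 = X / Axis 2 = Z settings are used to flip the result back.
img_xyz = np.swapaxes(img_zyx, 0, 2)
print(img_xyz.shape)  # (333, 333, 143)
```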
![SwapFlipDimensions](images/tutorials/thirdparty/monai_example2_11.png "SwapFlipDimensions") -After the algorithm has finished, we have to flip the images back to the original order. Open the panel of the `SwapFlipDimensions1` module and select X as *Axis 1* and Z as *Axis 2*. +After the algorithm has finished, we have to flip the images back to the original order. Open the panel of the `SwapFlipDimensions1` module and select *X* as Axis 1 and *Z* as Axis 2. Finally, we want to show the results of the algorithm as a semitransparent overlay on the image. Open the panel of the `SoView2DOverlay` and define the following settings: -* Blend Mode: Blend -* Alpha Factor: 0.5 -* Base Color: red +* Blend Mode = *Blend* +* Alpha Factor = *0.5* +* Base Color = *red* ![View2DOverlay](images/tutorials/thirdparty/monai_example2_12.png "View2DOverlay") @@ -297,12 +294,12 @@ Commands { ``` {{}} -If the user touches the trigger start, a Python function *onStart* will be executed. Whenever the size of our image is changed, we call a function called *_sizeChanged* and if the input image changes, we want to reset the module to its default values. +If the user touches the trigger start, a Python function onStart will be executed. Whenever the size of our image is changed, we call a function called _sizeChanged, and if the input image changes, we want to reset the module to its default values. ### Python Scripting The next step is to write our Python code. -Right-click {{< mousebutton "right" >}} *MONAIDemo.py* in *Commands* section line *source*. MATE opens showing the *.py* file of our module. +Right-click {{< mousebutton "right" >}} *MONAIDemo.py* in the *source* line of the *Commands* section. MATE opens, showing the *.py* file of our module. Insert the following code: @@ -336,9 +333,9 @@ def _sizeChanged(): ``` {{}} -These functions should be enough to run the module.
You can try them by changing the input image of our module, by changing any of the size values in *Module Inspector*, or by clicking *start*. +These functions should be enough to run the module. You can try them by changing the input image of our module, by changing any of the size values in Module Inspector, or by clicking *start*. -Let's implement the *_getImage* function first: +Let's implement the _getImage function first: {{< highlight filename="MONAIDemo.py" >}} ```Python @@ -411,7 +408,7 @@ We want to use the image that has been modified according to our pretrained netw ``` {{}} -This function now already calculates the segmentation using the *MONAI* model. The problem is that it may happen that our subimage with the size 160 x 160 x 160 is located somewhere in our original image where no spleen is visible. +This function already calculates the segmentation using the *MONAI* model. The problem is that our subimage of size *160 x 160 x 160* may be located somewhere in our original image where no spleen is visible. We have to calculate a bounding box in our `ROISelect` module and need to be able to move this bounding box to the correct location. @@ -502,4 +499,3 @@ You can also use the other examples from *MONAI Model Zoo* the same way, just ma * The general principles are always the same for all models. {{< networkfile "examples/thirdparty/monai/MONAIDemo.zip" >}} - diff --git a/mevislab.github.io/content/tutorials/thirdparty/assimp.md b/mevislab.github.io/content/tutorials/thirdparty/assimp.md index 20b30378e..703887299 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/assimp.md +++ b/mevislab.github.io/content/tutorials/thirdparty/assimp.md @@ -16,7 +16,7 @@ menu: # Asset-Importer-Lib (assimp) {#assimp} ## Introduction -[Assimp](http://www.assimp.org "assimp") (Asset-Importer-Lib) is a library to load and process geometric scenes from various 3D data formats.
+[Assimp](https://www.assimp.org "assimp") (Asset-Importer-Lib) is a library to load and process geometric scenes from various 3D data formats. This chapter provides some examples of how 3D formats can be imported into MeVisLab. In general, you always need a `SoSceneLoader` module. The `SoSceneLoader` allows loading meshes as Open Inventor points/lines/triangles/faces using the Open Asset Import Library. diff --git a/mevislab.github.io/content/tutorials/thirdparty/assimp/assimpexample1.md b/mevislab.github.io/content/tutorials/thirdparty/assimp/assimpexample1.md index 097fe914c..b88844a13 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/assimp/assimpexample1.md +++ b/mevislab.github.io/content/tutorials/thirdparty/assimp/assimpexample1.md @@ -25,7 +25,7 @@ This example uses the *assimp* library to load a 3D file and save the file as *. ### Develop Your Network Add the modules `SoSceneLoader`, `SoBackground`, and `SoExaminerViewer` to your workspace and connect them as seen below. -![Example Network](images/tutorials/thirdparty/assimp_example1.png "Example Network") +![Example network](images/tutorials/thirdparty/assimp_example1.png "Example network") ### Open the 3D File Select the file *vtkCow.obj* from the MeVisLab demo data directory. Open `SoExaminerViewer` and inspect the scene. You will see a 3D cow. @@ -38,11 +38,11 @@ In case you cannot see the cow, it might be located outside your current cam Add a `SoSphere` to the workspace and connect it to your viewer. Set the *Radius* of your sphere to 2 and inspect your viewer. -![Cow and Sphere in SoExaminerViewer](images/tutorials/thirdparty/CowAndSphere.png "Cow and Sphere in SoExaminerViewer") +![Cow and sphere in SoExaminerViewer](images/tutorials/thirdparty/CowAndSphere.png "Cow and sphere in SoExaminerViewer") You can also define a material for your sphere, but what we wanted to show is: You can use the loaded 3D files in MeVisLab Open Inventor scenes.
-![Cow and red Sphere in SoExaminerViewer](images/tutorials/thirdparty/CowAndSphere_red.png "Cow and red Sphere in SoExaminerViewer") +![Cow and red sphere in SoExaminerViewer](images/tutorials/thirdparty/CowAndSphere_red.png "Cow and red sphere in SoExaminerViewer") ### Save Your Scene as *.stl* File for 3D Printing Add a `SoSceneWriter` module to your workspace. The `SoExaminerViewer` has a hidden output that can be shown on pressing {{}}. Connect the `SoSceneWriter` to the output. @@ -52,14 +52,14 @@ Name your output *.stl* file and select *Stl Ascii* as output format, so that we ![SoSceneWriter](images/tutorials/thirdparty/SoSceneWriter.png "SoSceneWriter") {{}} -The `SoSceneWriter` can save node color information when saving in Open Inventor (ASCII or binary) or in VRML format. The `SoSceneWriter` needs to be attached to a `SoWEMRenderer` that renders in *ColorMode:NodeColor*. +The `SoSceneWriter` can save node color information when saving in Open Inventor (ASCII or binary) or in a simple VRML format. The `SoSceneWriter` needs to be attached to a `SoWEMRenderer` that renders in ColorMode *NodeColor*. There are free [tools](https://www.patrickmin.com/meshconv/) available to convert at least VRML to STL. {{}} -Write your scene and open the resulting file in your preferred editor. As an alternative, you can also open the file in an *.stl* file reader like Microsoft 3D Viewer. +Write your scene and open the resulting file in your preferred editor. As an alternative, you can also open the file in an *.stl* file reader like [Microsoft 3D Viewer](https://apps.microsoft.com/detail/9nblggh42ths?hl=en-ca&gl=CA). -![Microsoft 3D-Viewer](images/tutorials/thirdparty/Microsoft_3D_Viewer.png "Microsoft 3D-Viewer") +![Microsoft 3D viewer](images/tutorials/thirdparty/Microsoft_3D_Viewer.png "Microsoft 3D viewer") ### Load the File Again For loading your *.stl* file, you can use a `SoSceneLoader` and a `SoExaminerViewer`.
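Since *Stl Ascii* is a plain-text format, the file written above can be inspected or even generated with a few lines of Python. A sketch with a hypothetical one-triangle solid (illustration data only, not the cow scene):

```python
import os
import tempfile

# Minimal ASCII STL content: one facet with a normal and three vertices.
# The coordinates are arbitrary illustration data.
stl_text = """solid example
  facet normal 0 0 1
    outer loop
      vertex 0 0 0
      vertex 1 0 0
      vertex 0 1 0
    endloop
  endfacet
endsolid example
"""

path = os.path.join(tempfile.mkdtemp(), "example.stl")
with open(path, "w") as f:
    f.write(stl_text)

# An ASCII STL file always starts with "solid <name>".
with open(path) as f:
    header = f.readline().strip()
print(header)  # solid example
```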
diff --git a/mevislab.github.io/content/tutorials/thirdparty/matplotlib.md b/mevislab.github.io/content/tutorials/thirdparty/matplotlib.md index 7b236647a..74fcdddd0 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/matplotlib.md +++ b/mevislab.github.io/content/tutorials/thirdparty/matplotlib.md @@ -17,12 +17,12 @@ menu: Matplotlib, introduced by John Hunter in 2002 and initially released in 2003, is a comprehensive data visualization library in Python. It is widely used in the scientific world as it is easy for beginners to grasp and provides high-quality plots and images that are widely customizable. {{}} -The documentation on Matplotlib along with general examples, cheat sheets, and a starting guide can be found [here](https://matplotlib.org/). +The documentation on Matplotlib, along with general examples, cheat sheets, and a starting guide, can be found [here](https://matplotlib.org/). {{}} As MeVisLab supports the integration of Python scripts, e.g., for test automation, Matplotlib can be used to visualize any data you might want to see. And as it is directly integrated into MeVisLab, you don't have to install it (via `PythonPip` module) first. -In the following tutorial pages on Matplotlib, you will be shown how to create a module in MeVisLab that helps you plot greyscale distributions of single slices or defined sequences of slices of a DICOM image and layer the grayscale distributions of two chosen slices for comparison. +In the following tutorial pages on Matplotlib, you will be shown how to create a module in MeVisLab that helps you plot grayscale distributions of single slices or defined sequences of slices of a DICOM image and layer the grayscale distributions of two chosen slices for comparison. * The module that is adapted during the tutorials is set up in the [Example 1: Module Setup](tutorials/thirdparty/matplotlib/modulesetup) tutorial.
* The panel and two-dimensional plotting functionality is added in [Example 2: 2D Plotting](tutorials/thirdparty/matplotlib/2dplotting). @@ -31,4 +31,4 @@ In the following tutorial pages on Matplotlib, you will be shown how to create a {{}} Notice that for the Matplotlib tutorials, the previous tutorial always works as a foundation for the following one. -{{}} \ No newline at end of file +{{}} diff --git a/mevislab.github.io/content/tutorials/thirdparty/matplotlib/2Dplotting.md b/mevislab.github.io/content/tutorials/thirdparty/matplotlib/2Dplotting.md index 739a0db36..31e70f22a 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/matplotlib/2Dplotting.md +++ b/mevislab.github.io/content/tutorials/thirdparty/matplotlib/2Dplotting.md @@ -21,11 +21,11 @@ In this tutorial, we will equip the macro module we created in the [previous tut ## Steps to Do Open the module definition folder of your macro module and the related *.script* file in MATE. Then, activate the preview as shown below: -![MATE Preview](images/tutorials/thirdparty/Matplotlib7.png) +![MATE's Preview view](images/tutorials/thirdparty/Matplotlib7.png "MATE's Preview view") -Drag the small preview window to the bottom right corner of your window where it does not bother you. We will now be adding contents to be displayed there. +Drag the small preview window to the bottom right corner of your window where it does not bother you. We will now add contents to be displayed there. -Adding the following code to your *.script* file will open a panel window if the macro module is clicked. +Adding the following code to your *.script* file will open a panel window if the macro module is double-clicked {{< mousebutton "left" >}}. This new panel window contains a Matplotlib canvas where the plots will be displayed later on, as well as two prepared boxes that we will add functions to in the next step.
{{< highlight filename = "BaseNetwork.script">}} @@ -65,25 +65,25 @@ Window { Letting a box expand on the x- or y-axis, or adding an empty object that does so, shapes the way the panel looks and helps with the positioning of the elements. You can also try to vary the positioning by adding or removing "expand" statements or moving boxes from a vertical to a horizontal alignment. Hover over the boxes in the preview to explore the concept. {{}} -You can click and hold onto a box to move it within the preview. Your code will automatically be changed according to the new positioning. +You can click {{< mousebutton "left" >}} and hold onto a box to move it within the preview. Your code will automatically be changed according to the new positioning. {{}} **Now, we need to identify which module parameters we want to be able to access from the panel of our macro:** To plot a slice or a defined sequence of slices, we need to be able to set a start and an end. -Go back into your MeVisLab workspace, right-click your `BaseNetwork` module and choose "Show Internal Network". +Go back into your MeVisLab workspace, right-click {{< mousebutton "right" >}} your `BaseNetwork` module and choose *Show Internal Network*.
-![SubImage module info](images/tutorials/thirdparty/Matplotlib8.png "The `SubImage` module provides the option to set sequences of slices.") -![SubImage module panel](images/tutorials/thirdparty/Matplotlib9.PNG "The starting and ending slices of the sequence can be set in the module panel.") +![The SubImage module provides the option to set sequences of slices](images/tutorials/thirdparty/Matplotlib8.png "The SubImage module provides the option to set sequences of slices") +![The starting and ending slices of the sequence can be set in the module panel](images/tutorials/thirdparty/Matplotlib9.PNG "The starting and ending slices of the sequence can be set in the module panel") {{}} -To find out what the parameters are called, what type of values they contain and receive, and what they refer to, you can right-click on them within the panel. +To find out what the parameters are called, what type of values they contain and receive, and what they refer to, you can right-click {{< mousebutton "right" >}} on them within the panel. {{}} -We now know that we will need `SubImage.z` and `SubImage.sz` to define the start and end of a sequence. +We now know that we will need SubImage.z and SubImage.sz to define the start and end of a sequence. But there are a few other module parameters that must be set beforehand to make sure the data we extract to plot later is comparable and correct. -To do so, we will be defining a "setDefaults" function for our module. Open the *.py* file and add the code below. +To do so, we will be defining a setDefaults function for our module. Open the *.py* file and add the code below.
{{< highlight filename = "BaseNetwork.py">}} ```Python @@ -118,7 +118,7 @@ def updateSlices(): ``` {{}} -Make sure that the variable declarations as "None" are put above the "setDefaults" function and add the execution of the "updateSlices()" function into the "setDefaults" function, like so: +Make sure that the variable definitions (initialized to None) are placed above the setDefaults function, and add a call to the updateSlices function inside setDefaults, like so: {{< highlight filename = "BaseNetwork.py">}} ```Python @@ -137,7 +137,7 @@ def setDefaults(): ``` {{}} -Now we are ensuring that the "setDefaults" function and therefore also the "updateSlices" function are executed every time the panel is opened by setting "setDefaults" as a wakeup command. +Now we ensure that the setDefaults function, and therefore also the updateSlices function, are executed every time the panel is opened by registering setDefaults as a wakeup command. {{< highlight filename = "BaseNetwork.script">}} ```Stan @@ -169,7 +169,7 @@ Commands { ``` {{}} To see if all of this is working, we need to embed fields into our panel. -Put this inside of the box titled "Single Slice": +Put this inside the box titled *Single Slice*: {{< highlight filename = "BaseNetwork.script">}} ```Stan @@ -188,7 +188,7 @@ Put this inside of the box titled "Single Slice": ``` {{}} -And then add this to your box titled "Sequence": +And then add this to your box titled *Sequence*: {{< highlight filename = "BaseNetwork.script">}} ```Stan @@ -219,11 +219,11 @@ Lastly, put this under your two boxes, but above the empty element in the vertic {{}} If you followed all of the listed steps, your panel preview should look like this and display all the current parameter values.
-![Adapted macro panel](images/tutorials/thirdparty/Matplotlib10.PNG) +![Adapted macro panel](images/tutorials/thirdparty/Matplotlib10.PNG "Adapted macro panel") We can now work on the functions that visualize the data as plots on the Matplotlib canvas. You will have noticed how all of the buttons in the *.script* file have a command. Whenever that button is clicked, its designated command is executed. -However, for any of the functions referenced via "command" to work, we need one that ensures that the plots are shown on the integrated Matplotlib canvas. We will start with that one. +However, for any of the functions referenced via *command* to work, we need one that ensures that the plots are shown on the integrated Matplotlib canvas. We will start with that one. {{< highlight filename = "BaseNetwork.py">}} ```Python @@ -302,18 +302,18 @@ def click2D(): You should now be able to reproduce results like these: -![Single Slice 2D](images/tutorials/thirdparty/Matplotlib13.PNG "2D plot of slice 28") -![Small Sequence 2D](images/tutorials/thirdparty/Matplotlib112.PNG "Smaller sequences are displayed as multiple single slice plots.") +![2D plot of slice 28](images/tutorials/thirdparty/Matplotlib13.PNG "2D plot of slice 28") +![Smaller sequences are displayed as multiple single slice plots](images/tutorials/thirdparty/Matplotlib112.PNG "Smaller sequences are displayed as multiple single slice plots") ![Sequence in 2D](images/tutorials/thirdparty/Matplotlib122.PNG "Sequence in 2D") {{}} Notice how the bin size affects the plot's appearance. {{}} -You can download the .py file below if you want. +You can download the *.py* file below if you want. {{< networkfile "/tutorials/thirdparty/matplotlib/BaseNetwork.py" >}} ## Summary -* Functions are connected to fields of the panel via commands. +* Functions are connected to fields of the panel via *command*s. * The panel preview in MATE can be used to change positioning of panel components without touching the code.
-* An "expand" statement can help the positioning of components in the panel. +* An *expand* statement can help the positioning of components in the panel. diff --git a/mevislab.github.io/content/tutorials/thirdparty/matplotlib/3Dplotting.md b/mevislab.github.io/content/tutorials/thirdparty/matplotlib/3Dplotting.md index ef487cac9..74d330416 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/matplotlib/3Dplotting.md +++ b/mevislab.github.io/content/tutorials/thirdparty/matplotlib/3Dplotting.md @@ -56,8 +56,8 @@ After saving, you should be able to reproduce results like these: You cannot zoom into 3D plots on a Matplotlib canvas. Try changing the viewing angle instead. {{}} -![Single Slice 3D](images/tutorials/thirdparty/Matplotlib27.PNG) -![Single Slice 3D](images/tutorials/thirdparty/Matplotlib29.PNG) +![Show a single slice in 3D](images/tutorials/thirdparty/Matplotlib27.PNG "Show a single slice in 3D") +![Compare two slices in 3D](images/tutorials/thirdparty/Matplotlib29.PNG "Compare two slices in 3D") You can download the *.py* file below if you want. {{< networkfile "/tutorials/thirdparty/matplotlib/BaseNetwork3D.py" >}} diff --git a/mevislab.github.io/content/tutorials/thirdparty/matplotlib/modulesetup.md b/mevislab.github.io/content/tutorials/thirdparty/matplotlib/modulesetup.md index 5b2afce90..ff0433859 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/matplotlib/modulesetup.md +++ b/mevislab.github.io/content/tutorials/thirdparty/matplotlib/modulesetup.md @@ -20,23 +20,23 @@ To be able to access the data needed for our grayscale distribution plots, we ne ## Steps to Do Open up your MeVisLab workspace and add the modules `LocalImage`, `SubImage`, and `Histogram` to it. -Connect the output of `LocalImage` to the input of `SubImage`, and the output of `SubImage` with the input of `Histogram`. +Connect the output of `LocalImage` to the input of `SubImage`, and the output of `SubImage` to the input of `Histogram`. 
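The grayscale distribution that this network extracts can be sketched in plain NumPy: counting how often each gray value occurs in a slice (synthetic data here, not a DICOM image, and not the `Histogram` module's actual implementation):

```python
import numpy as np

# A synthetic 8-bit "slice" of 512 x 512 random gray values.
rng = np.random.default_rng(seed=0)
slice_2d = rng.integers(0, 256, size=(512, 512))

# The grayscale distribution: how many voxels fall into each gray-value bin.
counts, bin_edges = np.histogram(slice_2d, bins=32, range=(0, 256))

# Every voxel lands in exactly one bin, so the counts sum to 512 * 512.
print(int(counts.sum()))  # 262144
```

These per-bin counts are exactly the kind of data that is later handed to Matplotlib for plotting.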
If you feel like using a shortcut, you can also download the base network below and open it in your MeVisLab. Your finished network should look like this: -![Base network](images/tutorials/thirdparty/Matplotlib1.PNG) +![Base network](images/tutorials/thirdparty/Matplotlib1.PNG "Base network") {{< networkfile "/tutorials/thirdparty/matplotlib/MatplotlibBaseNetwork.mlab" >}} ### Excursion on the Concept Behind Modules To be able to build on the foundation we just set, it can be useful to understand how modules are conceptualized: -You will have noticed how for every module, a panel will pop up if you double-click it. The modules panel contains all of its functional parameters and enables you, as the user, to change them within a graphical user interface (GUI). We will do something similar later on. +You will have noticed how for every module, a panel will pop up if you double-click {{< mousebutton "left" >}} it. The module panel contains all of its functional parameters and enables you, as the user, to change them within a graphical user interface (GUI). We will do something similar later on. -But where and how is a module panel created? To answer this question, please close the module panel and right-click on the module. -A context menu will open, click on "Related Files". +But where and how is a module panel created? To answer this question, please close the module panel and right-click {{< mousebutton "right" >}} the module. +A context menu will open, select *Related Files*. -![Context menu of the "SubImage" module](images/tutorials/thirdparty/Matplotlib2.png) +![Context menu of the SubImage module](images/tutorials/thirdparty/Matplotlib2.png "Context menu of the SubImage module") As you can see, each module has a *.script* and a *.py* file named like the module itself: * The *.script* file is where the appearance and structure of the module panel as well as their commands are declared. 
@@ -45,14 +45,16 @@ As you can see, each module has a *.script* and a *.py* file named like the modu Some modules also reference an *.mlab* file, which usually contains their internal network as the module is a macro. **Let's continue with our module setup now:** -If your network is ready, group it by right-clicking on your group's title and select "Grouping", then "Add To A New Group". +If your network is ready, group it by selecting all modules, right-clicking {{< mousebutton "right" >}}, and selecting *Grouping*, then *Add To New Group*. + Afterward, convert your grouped network into a macro module. -![Converting to a macro](images/tutorials/thirdparty/Matplotlib3.png) +![Converting module group to a local macro](images/tutorials/thirdparty/Matplotlib3.png "Converting module group to a local macro") {{}} -Information on how to convert groups into macros can be found [here](tutorials/basicmechanisms#TutorialMacroModules). +Information on how to convert a module group into a local macro can be found [here](tutorials/basicmechanisms#TutorialMacroModules). {{}} -Depending on whether you like to reuse your projects in other workspaces, it can make sense to convert them. +Depending on whether you would like to reuse your projects in other workspaces, it can make sense to convert them to a global macro. + We'd recommend doing so. Now open the *.script* file of your newly created macro through the context menu. The file will be opened within MATE (MeVisLab Advanced Text Editor). Add this short piece of code into your *.script* file and make sure that the *.script* and the *.py* files are named exactly the same as the module they are created for.
@@ -66,14 +68,14 @@ Now open the *.script* file of your newly created macro through the context menu {{}} -Click the "Reload" button, which is located above the script for the *.py* file to be added into the module definition folder, then open it using the "Files" button on the same bar as demonstrated below: -![MATE](images/tutorials/thirdparty/Matplotlib5.png) +Click the *Reload* button located above the script so that the *.py* file is added to the module definition folder, then open the file using the *Files* button on the same bar, as demonstrated below: +![Global macro's files in MATE](images/tutorials/thirdparty/Matplotlib5.png "Global macro's files in MATE") {{}} -The [MDL Reference](https://mevislabdownloads.mevis.de/docs/current/MeVisLab/Resources/Documentation/Publish/SDK/MDLReference/index.html) is a very handy tool for this and certainly also for following projects. +The [MDL Reference](https://mevislabdownloads.mevis.de/docs/current/MeVisLab/Resources/Documentation/Publish/SDK/MDLReference/index.html) contains useful information for this and certainly also for following projects. {{}} -You have now created your own module and enabled the *.script* file (hence the GUI or panel later on) to access functions and methods written in the *.py* file. +You have now created your own module and enabled the *.script* file (where the GUI or panel can be described later on) to access functions and methods written in the *.py* file. ## Summary * Modules are defined by the contents within their definition folder.
diff --git a/mevislab.github.io/content/tutorials/thirdparty/matplotlib/slicecomparison.md b/mevislab.github.io/content/tutorials/thirdparty/matplotlib/slicecomparison.md index 9cedc70aa..a83104d82 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/matplotlib/slicecomparison.md +++ b/mevislab.github.io/content/tutorials/thirdparty/matplotlib/slicecomparison.md @@ -18,13 +18,12 @@ menu: ## Introduction We will adapt the previously created macro module to be able to overlay two defined slices to compare their grayscale distributions. * The module we are adapting has been set up in the [Example 1: Module Setup](tutorials/thirdparty/matplotlib/modulesetup) tutorial. -* The panel and two-dimensional plotting functionality has been added in [Example 2: 2D Plotting] - (tutorials/thirdparty/matplotlib/2dplotting). +* The panel and two-dimensional plotting functionality has been added in [Example 2: 2D Plotting](tutorials/thirdparty/matplotlib/2dplotting). ## Steps to Do -At first, we will extend the panel: Open your `BaseNetwork` macro module within an empty MeVisLab workspace and select the *.script* file from its related files. +As a first step, we will extend the panel: Open your `BaseNetwork` macro module within an empty MeVisLab workspace and select the *.script* file from its related files. -Add the following code into your *.script* file between the "Single Slice" and the "Sequence" box. +Add the following code into your *.script* file between the *Single Slice* and the *Sequence* boxes. {{< highlight filename = "BaseNetwork.script">}} ```Stan @@ -46,9 +45,9 @@ Add the following code into your *.script* file between the *Single Slice* and t Your panel should now be changed to look like this: -![MATE Preview](images/tutorials/thirdparty/Matplotlib14.PNG) +![Window preview](images/tutorials/thirdparty/Matplotlib14.PNG "Window preview") -We will now add the "comparison" function, to give the "Plot" button in our "Comparison" box a purpose.
To do so, switch to your module's *.py* file and choose a cosy place for the following piece of code: +We will now add the comparison function, to give the *Plot* button in our *Comparison* box a purpose. To do so, switch to your module's *.py* file and choose a cosy place for the following piece of code: {{< highlight filename = "BaseNetwork.py">}} ```Python @@ -82,9 +81,8 @@ def comparison(): You should now be able to reproduce results like these: -![Comparison](images/tutorials/thirdparty/Matplotlib16.PNG) -![Comparison](images/tutorials/thirdparty/Matplotlib17.PNG) +![Compare slices 25 and 40](images/tutorials/thirdparty/Matplotlib16.PNG "Compare slices 25 and 40") +![Compare slices 40 and 60](images/tutorials/thirdparty/Matplotlib17.PNG "Compare slices 40 and 60") ## Summary * Grayscale distributions of two slices can be layered to compare them and make deviations noticeable. - diff --git a/mevislab.github.io/content/tutorials/thirdparty/monai.md b/mevislab.github.io/content/tutorials/thirdparty/monai.md index e655ff3e9..09c60484f 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/monai.md +++ b/mevislab.github.io/content/tutorials/thirdparty/monai.md @@ -16,12 +16,12 @@ menu: # MONAI {#monai} ## Introduction -[MONAI](https://github.com/Project-MONAI "monai") (**M**edical **O**pen **N**etwork for **AI**) is an open-source framework built on [PyTorch](http://www.pytorch.org "pytorch") designed for developing and deploying AI models in medical imaging. +[MONAI](https://github.com/Project-MONAI "MONAI") (**M**edical **O**pen **N**etwork for **AI**) is an open-source framework built on [PyTorch](https://www.pytorch.org "PyTorch") designed for developing and deploying AI models in medical imaging. 
Created by [NVIDIA](https://docs.nvidia.com/monai/index.html "NVIDIA") and the Linux Foundation, it provides specialized tools for handling medical data formats like DICOM and NIfTI, along with advanced preprocessing, augmentation, and 3D image analysis capabilities. MONAI includes ready-to-use deep learning models (such as UNet and SegResNet) and utilities for segmentation, classification, and image registration. It supports distributed GPU training and ensures reproducible research workflows.

## Available Tutorials
-* [Example 1: Install MONAI using PythonPip module](tutorials/thirdparty/monai/monaiexample1/)
-* [Example 2: Applying a spleen segmentation model from MONAI in MeVisLab](tutorials/thirdparty/monai/monaiexample2/)
+* [Example 1: Install MONAI Using PythonPip Module](tutorials/thirdparty/monai/monaiexample1/)
+* [Example 2: Applying a Spleen Segmentation Model from MONAI in MeVisLab](tutorials/thirdparty/monai/monaiexample2/)
diff --git a/mevislab.github.io/content/tutorials/thirdparty/opencv.md b/mevislab.github.io/content/tutorials/thirdparty/opencv.md
index 1d5397ccf..a11a71925 100644
--- a/mevislab.github.io/content/tutorials/thirdparty/opencv.md
+++ b/mevislab.github.io/content/tutorials/thirdparty/opencv.md
@@ -21,4 +21,4 @@ menu:
This chapter provides some examples of how to use OpenCV in MeVisLab.

## Other Resources
-You can find a lot of OpenCV examples and tutorials on their [website](https://docs.opencv.org/4.x/d9/df8/tutorial_root.html). \ No newline at end of file
+You can find a lot of OpenCV examples and tutorials on their [website](https://docs.opencv.org/4.x/d9/df8/tutorial_root.html).
diff --git a/mevislab.github.io/content/tutorials/thirdparty/opencv/thirdpartyexample1.md b/mevislab.github.io/content/tutorials/thirdparty/opencv/thirdpartyexample1.md
index 9c80ad8e1..eb10ee145 100644
--- a/mevislab.github.io/content/tutorials/thirdparty/opencv/thirdpartyexample1.md
+++ b/mevislab.github.io/content/tutorials/thirdparty/opencv/thirdpartyexample1.md
@@ -8,7 +8,7 @@ tags: ["Advanced", "Tutorial", "OpenCV", "Python", "Webcam", "Macro", "Macro mod
menu:
main:
identifier: "thirdpartyexample1"
- title: "Access Your Webcam and Use the Live Video in MeVisLab Via OpenCV."
+ title: "Access Your Webcam and Use the Live Video in MeVisLab via OpenCV."
weight: 855
parent: "opencv"
---
@@ -23,7 +23,7 @@ In this example, we are using the `PythonImage` module and access your webcam to
### Creating the Network to be Used for Testing
Add the modules to your workspace and connect them as seen below.
-![Example Network ](images/tutorials/thirdparty/network_example1.png "Example Network ")
+![Example network](images/tutorials/thirdparty/network_example1.png "Example network")
The viewer is empty because the image needs to be set via Python scripting.
@@ -32,14 +32,14 @@ More information about the `PythonImage` module can be found {{< docuLinks "/Sta
{{}}
### Create a Macro Module
-Now you need to create a macro module from your network. You can either group your modules, create a local macro, and convert it to a global macro module, or you use the Project Wizard and load your *.mlab* file.
+Now you need to create a macro module from your network. You can either group your modules, create a local macro and convert it to a global macro module, or use the Project Wizard and load your *.mlab* file.
{{}}
-A tutorial on how to create your own macro modules can be found in [Example 2.2: Global macro modules](tutorials/basicmechanisms/macromodules/globalmacromodules "Example 2.2: Global macro modules"). Make sure to add a Python file to your macro module.
+A tutorial on how to create your own macro modules can be found in [Example 2.2: Global Macro Modules](tutorials/basicmechanisms/macromodules/globalmacromodules "Example 2.2: Global Macro Modules"). Make sure to add a Python file to your macro module.
{{}}
### Add the View2D to Your UI
-Next, we need to add the `View2D` to a Window of your macro module. Right-click on your module {{< mousebutton "right" >}}, open the context menu and select {{< menuitem "Related Files" ".script" >}}. The text editor {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch26.html" "MATE">}} opens. You can see the *.script* file of your module.
+Next, we need to add the `View2D` to a window of your macro module. Right-click on your module {{< mousebutton "right" >}}, open the context menu and select {{< menuitem "Related Files" ".script" >}}. The text editor {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch26.html" "MATE">}} opens. You can see the *.script* file of your module.
Add the following to your file:
{{< highlight filename=".script" >}}
@@ -142,9 +142,9 @@ def stopCapture():
```
{{}}
-We now imported *cv2* and *OpenCVUtils*, so that we can use them in Python. Additionally, we defined a list of *_interfaces* and a *camera*. The import of *mevis* is not necessary for this example.
+We have now imported *cv2* and *OpenCVUtils* so that we can use them in Python. Additionally, we defined a list of _interfaces and a camera. The import of *mevis* is not necessary for this example.
-The *setupInterfaces* function is called whenever the *Window* of your module is opened. Here we are getting the interface of the `PythonImage` module and append it to our global list.
+The setupInterfaces function is called whenever the *Window* of your module is opened. Here, we get the interface of the `PythonImage` module and append it to our global list.
### Accessing the Webcam
Now we want to start capturing the camera.
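The timer-driven capture used in this tutorial — start, grab a frame, re-arm, pause — can be sketched in isolation. In this sketch, `FakeCamera` and the manually driven queue are illustrative stand-ins for `cv2.VideoCapture` and the network context's timer; only the start/grab/pause shape mirrors the tutorial's code:

```python
from collections import deque

class CaptureLoop:
    """Sketch of the capture pattern with injectable stand-ins:
    `camera` needs a read() -> (ok, frame) method (like cv2.VideoCapture),
    `schedule` queues the next grab (like a network-context timer), and
    `on_frame` consumes each frame (like the PythonImage interface)."""

    def __init__(self, camera, schedule, on_frame):
        self.camera = camera
        self.schedule = schedule
        self.on_frame = on_frame
        self.running = False

    def start(self):
        self.running = True
        self.schedule(self._grab)

    def pause(self):
        # Mirrors stopCapture: stop rescheduling; the camera stays open.
        self.running = False

    def _grab(self):
        if not self.running:
            return
        ok, frame = self.camera.read()
        if ok:
            self.on_frame(frame)   # in the tutorial: set the PythonImage
        self.schedule(self._grab)  # re-arm, like the 0.1 s timer


class FakeCamera:
    """Illustrative stand-in that yields three frames, then fails."""
    def __init__(self):
        self.frames = deque([1, 2, 3])

    def read(self):
        return (True, self.frames.popleft()) if self.frames else (False, None)
```

Driving the queue by hand shows the behavior: after `start()`, each tick grabs one frame and re-arms itself until `pause()` stops the rescheduling — which is what stopping the timer does in the tutorial.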
@@ -168,11 +168,11 @@ def updateImage(image): ``` {{}} -The *startCapture* function gets the camera from the *cv2* object if not already available. Then, it calls the current MeVisLab network context and creates a timer that calls a *grabImage* function every 0.1 seconds. +The startCapture function gets the camera from the *cv2* object if not already available. Then, it calls the current MeVisLab network context and creates a timer that calls a grabImage function every 0.1 seconds. -The *grabImage* function reads an image from the *camera* and calls *updateImage*. The interface from the `PythonImage` module is used to set the image from the webcam. The MeVisLab *OpenCVUtils* converts the OpenCV image to the MeVisLab image format *MLImage*. +The grabImage function reads an image from the camera and calls updateImage. The interface from the `PythonImage` module is used to set the image from the webcam. The MeVisLab *OpenCVUtils* converts the OpenCV image to the MeVisLab image format *MLImage*. -Next, we define what happens if you click the *Pause* button. +Next, we define what happens if you click the Pause button. {{< highlight filename=".py" >}} ```Python ... @@ -185,7 +185,7 @@ def stopCapture(): As we started a timer in our network context that updates the image every 0.1 seconds, we just stop this timer and the camera is paused. -In the end, we need to release the camera whenever you close the Window of your macro module. +In the end, we need to release the camera whenever you close the window of your macro module. {{< highlight filename=".py" >}} ```Python ... 
diff --git a/mevislab.github.io/content/tutorials/thirdparty/opencv/thirdpartyexample2.md b/mevislab.github.io/content/tutorials/thirdparty/opencv/thirdpartyexample2.md index 9e988fd34..1444494ac 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/opencv/thirdpartyexample2.md +++ b/mevislab.github.io/content/tutorials/thirdparty/opencv/thirdpartyexample2.md @@ -75,9 +75,9 @@ def releaseCamera(_): ``` {{}} -Opening your macro module and pressing *Start* should now open your webcam stream and an additional OpenCV window, which shows a blue rectangle around a detected face. +Opening your macro module and pressing Start should now open your webcam stream and an additional OpenCV window, which shows a blue rectangle around a detected face. -![Face Detection in MeVisLab using OpenCV](images/tutorials/thirdparty/bigbang.png "Face Detection in MeVisLab using OpenCV") +![Face detection in MeVisLab using OpenCV](images/tutorials/thirdparty/bigbang.png "Face detection in MeVisLab using OpenCV") ## Summary * This is just one example for using OpenCV in MeVisLab. You will find lots of other examples and tutorials online, we just wanted to show one possibility. diff --git a/mevislab.github.io/content/tutorials/thirdparty/pytorch.md b/mevislab.github.io/content/tutorials/thirdparty/pytorch.md index ffa308ebe..bcd542403 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/pytorch.md +++ b/mevislab.github.io/content/tutorials/thirdparty/pytorch.md @@ -16,12 +16,12 @@ menu: # PyTorch {#pytorch} ## Introduction -[PyTorch](http://www.pytorch.org "pytorch") is a machine learning framework based on the Torch library, used for applications such as Computer Vision and Natural Language Processing, originally developed by Meta AI and now part of the Linux Foundation umbrella. 
+[PyTorch](https://www.pytorch.org "PyTorch") is a machine learning framework based on the Torch library, used for applications such as Computer Vision and Natural Language Processing, originally developed by Meta AI and now part of the Linux Foundation umbrella.
-A lot of AI frameworks can be used within MeVisLab. We currently do not provide a preintegrated AI framework though as we try to avoid compatibility issues, and AI frameworks are very fast-moving by nature.
+A lot of AI frameworks can be used within MeVisLab. We do not provide a preintegrated AI framework though as we try to avoid compatibility issues, and AI frameworks are very fast-moving by nature.
Maybe also take a look at:
-* [TensorFlow](https://www.tensorflow.org "tensorflow")
+* [TensorFlow](https://www.tensorflow.org "TensorFlow")
* [Keras](https://keras.io "Keras")
* [scikit-learn](https://scikit-learn.org "scikit-learn")
@@ -38,5 +38,5 @@ The first example shows how to install *torch* and *torchvision* by using the Me
In this example, we are using a pretrained network from [torch.hub](https://pytorch.org/hub/) to generate an AI-based image overlay of a brain parcellation map.
### Segment Persons in Webcam Videos
-The second tutorial adapts the [Example 2: Face Detection with OpenCV](tutorials/thirdparty/opencv/thirdpartyexample2/ "Example 2: Face Detection with OpenCV") to segment a person shown in a webcam stream. The network has been taken from [torchvision](https://pytorch.org/vision/stable/index.html).
+The second tutorial adapts the [Example 2: Face Detection With OpenCV](tutorials/thirdparty/opencv/thirdpartyexample2/ "Example 2: Face Detection With OpenCV") to segment a person shown in a webcam stream. The network has been taken from [torchvision](https://pytorch.org/vision/stable/index.html).
diff --git a/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample1.md b/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample1.md
index 879c85f60..0c23ce8a1 100644
--- a/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample1.md
+++ b/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample1.md
@@ -13,7 +13,7 @@ menu:
parent: "pytorch"
---
-# Example 1: Installing PyTorch using the PythonPip module
+# Example 1: Installing PyTorch Using the PythonPip Module
## Introduction
The module `PythonPip` allows you to install additional Python packages to be used in MeVisLab.
@@ -39,19 +39,19 @@ Double-click {{< mousebutton "left" >}} the module and inspect the panel.
![PythonPip panel](images/tutorials/thirdparty/pytorch_example1_2.png "PythonPip panel")
-The panel shows all currently installed Python packages including their version and the MeVisLab package they are saved in. You can see a warning that the target package is set to read-only in the case you are selecting a MeVisLab package. Changing to one of your user packages (see [Example 2.1: Package creation](tutorials/basicmechanisms/macromodules/package/) for details) makes the warning disappear.
+The panel shows all currently installed Python packages including their version and the MeVisLab package they are saved in. You can see a warning that the target package is set to read-only in case you select a MeVisLab package. Changing to one of your user packages (see [Example 2.1: Package Creation](tutorials/basicmechanisms/macromodules/package/) for details) makes the warning disappear.
![Select user package](images/tutorials/thirdparty/pytorch_example1_3.png "Select user package")
{{}}
-Additional information on the `PythonPip` module can be found in [Example 4: Install additional Python packages via PythonPip module](tutorials/basicmechanisms/macromodules/pythonpip "PythonPip module").
+Additional information on the `PythonPip` module can be found in [Example 4: Install Additional Python Packages via PythonPip Module](tutorials/basicmechanisms/macromodules/pythonpip "PythonPip Module").
{{}}
#### Install Torch and Torchvision
-For our tutorials, we need to install *torch* and *torchvision*. Enter *torch torchvision* into the *Command* textbox and press *Install*.
+For our tutorials, we need to install *torch* and *torchvision*. Enter *torch torchvision* into the Command textbox and press Install.
{{}}
-We are using the CPU version of PyTorch for our tutorials as we want them to be as accessible as possible. If you happen to have a large GPU capacity (and CUDA support), you can also use the GPU version. You can install the necessary packages by using the PyTorch documentation available [here](https://pytorch.org/get-started/locally "PyTorch documentation").
+We are using the CPU version of PyTorch for our tutorials as we want them to be as accessible as possible. If you happen to have a large GPU capacity (and CUDA support), you can also use the GPU version. You can install the necessary packages by following the PyTorch documentation available [here](https://pytorch.org/get-started/locally "PyTorch Documentation").
{{}}
Continuing with CUDA support:
@@ -70,7 +70,7 @@ Alternatively, you can also add a parameter to *pip install* command: *--proxy h
![Install torch and torchvision](images/tutorials/thirdparty/pytorch_example1_4.png "Install torch and torchvision")
-After clicking *Install*, the *pip* console output opens and you can follow the process of the installation.
+After clicking Install, the *pip* console output opens and you can follow the process of the installation.
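Once the installation has finished, a quick sanity check is to import *torch* and resolve the compute device. The helper below is a sketch, not part of the tutorial; unlike the tutorial's scripts, it also degrades gracefully when torch is missing:

```python
def pick_device():
    """Return "cuda" when PyTorch is installed and a CUDA device is
    available, otherwise "cpu". Returning the plain string "cpu" when
    torch itself is absent keeps the check from raising."""
    try:
        import torch  # succeeds once 'torch torchvision' is installed
    except ImportError:
        return "cpu"
    return str(torch.device("cuda" if torch.cuda.is_available() else "cpu"))

print(pick_device())
```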
![Python pip output](images/tutorials/thirdparty/pytorch_example1_5.png "Python pip output") diff --git a/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample2.md b/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample2.md index 0dedd9a8a..b8ad1d9d6 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample2.md +++ b/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample2.md @@ -21,11 +21,11 @@ In this example, you are using a pretrained PyTorch deep learning model (HighRes HighRes3DNet is a 3D residual network presented by Li et al. in [On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task](https://link.springer.com/chapter/10.1007/978-3-319-59050-9_28). ## Steps to Do -Add a `LocalImage` module to your workspace and select the file *MRI_Head.dcm*. For PyTorch it is necessary to resample the data to a defined size. Add a `Resample3D` module to the `LocalImage` and open the panel. Change *Keep Constant* to *Voxel Size* and define *Image Size* as 176, 217, 160. +Add a `LocalImage` module to your workspace and select the file *MRI_Head.dcm*. For PyTorch, it is necessary to resample the data to a defined size. Add a `Resample3D` module to the `LocalImage` and open the panel. Change Keep Constant to *Voxel Size* and define Image Size as *176, 217, 160*. ![Resample3D module](images/tutorials/thirdparty/pytorch_example2_1.png "Resample3D module"). -The coordinates in PyTorch are also a little bit different than in MeVisLab; therefore, you have to rotate the image. Add an `OrthoSwapFlip` module and connect it to the `Resample3D` module. Change *View* to *Other* and set *Orientation* to *YXZ*. Also check *Flip horizontal*, *Flip vertical*, and *Flip depth*. *Apply* your changes. +The coordinates in PyTorch are also a little bit different than in MeVisLab; therefore, you have to rotate the image. 
Add an `OrthoSwapFlip` module and connect it to the `Resample3D` module. Change View to *Other* and set Orientation to *YXZ*. Also check Flip horizontal, Flip vertical, and Flip depth. Apply your changes.
![OrthoSwapFlip module](images/tutorials/thirdparty/pytorch_example2_2.png "OrthoSwapFlip module").
@@ -69,9 +69,9 @@ Commands {
```
{{}}
-In MATE, right-click {{< mousebutton "right" >}} the Project Workspace and add a new file *DemoAI.py* to your project. The workspace now contains an empty Python file.
+In MATE, right-click {{< mousebutton "right" >}} the *Project Workspace* and add a new file *DemoAI.py* to your project. The workspace now contains an empty Python file.
-![Project Workspace](images/tutorials/thirdparty/pytorch_example2_5.png "Project Workspace").
+![Project workspace](images/tutorials/thirdparty/pytorch_example2_5.png "Project workspace").
Switch back to MeVisLab IDE, right-click {{< mousebutton "right" >}} the local macro, and select {{< menuitem "Reload Definition">}}. Your new input and output interface is now available and you can connect images to your module.
@@ -85,7 +85,7 @@ Add a `LoadBase` module and connect it to a `SoMLLUT` module. The `SoMLLUT` need
![Final network](images/tutorials/thirdparty/pytorch_example2_7.png "Final network").
{{}}
-If your PC is equipped with less than 16GBs of RAM/working memory, we recommend to add a `SubImage` module between the `OrthoSwapFlip` and the `Resample3D` module. You should configure less slices in the z-direction to prevent your system from running out of memory.
+If your PC is equipped with less than 16GB of RAM/working memory, we recommend adding a `SubImage` module between the `OrthoSwapFlip` and the `Resample3D` module. You should configure fewer slices in the z-direction to prevent your system from running out of memory.
![SubImage module](images/tutorials/thirdparty/pytorch_example2_7b.png "SubImage module").
{{}}
@@ -109,7 +109,7 @@ Commands {
```
{{}}
-The *FieldListener* always calls the Python function *onStart* when the *Trigger* *start* is touched. We now need to implement the Python function. Right-click {{< mousebutton "right" >}} the command *onStart* and select {{< menuitem "Create Python Function 'onStart'">}}.
+The *FieldListener* always calls the Python function onStart when the *Trigger* *start* is touched. We now need to implement the Python function. Right-click {{< mousebutton "right" >}} the command onStart and select {{< menuitem "Create Python Function 'onStart'">}}.
The Python file opens automatically and the function is created.
@@ -151,7 +151,7 @@ When executing your Python script for the first time, you will get a ScriptError
{{}}
{{}}
-The script uses the CPU; in the case you want to use CUDA, you can replace the line *device = torch.device("cpu")* with: *device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')*
+The script uses the CPU; in case you want to use CUDA, you can replace the line device = torch.device("cpu") with device = torch.device('cuda' if torch.cuda.is_available() else 'cpu').
{{}}
The function does the following:
@@ -164,13 +164,13 @@ The function does the following:
## Execute the Segmentation
Change the alpha value of your `SoView2DOverlayMPR` to have a better visualization of the results.
-Switch back to the MeVisLab IDE and select your module `DemoAI`. In *Module Inspector*, click *Trigger* for *start* and wait a little bit until you can see the results.
+Switch back to the MeVisLab IDE and select your module `DemoAI`. In the Module Inspector, click *Trigger* for *start* and wait a little bit until you can see the results.
![Final result](images/tutorials/thirdparty/pytorch_example2_9.png "Final result").
Without adding a `SubImage`, the segmentation results should look like this:
-![Results](images/tutorials/thirdparty/pytorch_example2_10.png "Results").
+![Result without SubImage](images/tutorials/thirdparty/pytorch_example2_10.png "Result without SubImage"). ## Summary * Pretrained PyTorch networks can be used directly in MeVisLab via `PythonImage` module. diff --git a/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample3.md b/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample3.md index 28f1589d6..ac65cf023 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample3.md +++ b/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample3.md @@ -16,7 +16,7 @@ menu: # Example 3: Segment Persons in Webcam Videos ## Introduction -This tutorial is based on [Example 2: Face Detection with OpenCV](tutorials/thirdparty/opencv/thirdpartyexample2 "Example 2: Face Detection with OpenCV"). You can reuse some of the scripts already developed in the other tutorial. +This tutorial is based on [Example 2: Face Detection With OpenCV](tutorials/thirdparty/opencv/thirdpartyexample2 "Example 2: Face Detection With OpenCV"). You can reuse some of the scripts already developed in the other tutorial. ## Steps to Do Add the macro module developed in the previous example to your workspace. @@ -25,24 +25,24 @@ Add the macro module developed in the previous example to your workspace. Open the internal network of the module via middle mouse button {{< mousebutton "middle" >}} and right-click {{< mousebutton "right" >}} on the tab of the workspace showing the internal network. Select *Show Enclosing Folder*. -![Show Enclosing Folder](images/tutorials/thirdparty/pytorch_example3_2.png "Show Enclosing Folder") +![Context menu: Show Enclosing Folder](images/tutorials/thirdparty/pytorch_example3_2.png "Context menu: Show Enclosing Folder") The file browser opens showing the files of your macro module. Copy the *.mlab* file somewhere you can remember. 
### Create the Macro Module
-Open the the Project Wizard via {{< menuitem "File" "Run Project Wizard">}} and select *Macro Module*. Click *Run Wizard*.
+Open the Project Wizard via {{< menuitem "File" "Run Project Wizard">}} and select *Macro Module*. Click Run Wizard.
-![Project Wizard](images/tutorials/thirdparty/pytorch_example3_3.png "Project Wizard")
+![Project Wizard panel](images/tutorials/thirdparty/pytorch_example3_3.png "Project Wizard panel")
-Define the module properties as shown below, although you can choose your own name. Click *Next*.
+Define the module properties as shown below, although you can choose your own name. Click Next >.
-![Module Properties](images/tutorials/thirdparty/pytorch_example3_4.png "Module Properties")
+![Module Properties panel](images/tutorials/thirdparty/pytorch_example3_4.png "Module Properties panel")
-Define the module properties and select the copied *.mlab* file. Make sure to add a Python file and click *Next*.
+Define the module properties and select the copied *.mlab* file. Make sure to add a Python file and click Next >.
-![Macro Module Properties](images/tutorials/thirdparty/pytorch_example3_5.png "Macro Module Properties")
+![Macro Module Properties panel](images/tutorials/thirdparty/pytorch_example3_5.png "Macro Module Properties panel")
-Leave the module field reference as is and click *Create*. Close Project Wizard and select {{< menuitem "Extras" "Reload Module Database (Clear Cache)">}}.
+Leave the module field reference as is and click Create. Close the Project Wizard and select {{< menuitem "Extras" "Reload Module Database (Clear Cache)">}}.
### Script and Python Code
Open the script file of the `WebcamTest` module and copy the contents to your new PyTorch module. The result should be something like this:
@@ -93,7 +93,6 @@ Window {
If you open the panel of your new module, you can see the UI elements added. You cannot use the buttons yet, because the Python functions they call are not implemented.
Copy the Python code to your new module, too.
-
{{< highlight filename="PyTorchSegmentationExample.py" >}}
```Python
# from mevis import *
@@ -149,7 +148,7 @@ def releaseCamera(_):
```
{{}}
-You should now have the complete functionality of the [Example 2: Face Detection with OpenCV](tutorials/thirdparty/opencv/thirdpartyexample2 "Example 2: Face Detection with OpenCV").
+You should now have the complete functionality of the [Example 2: Face Detection With OpenCV](tutorials/thirdparty/opencv/thirdpartyexample2 "Example 2: Face Detection With OpenCV").
### Adapt the Network
For *PyTorch*, we require some additional modules in our network. Open the internal network of your module and add another `PythonImage` module. Connect a `Resample3D` and an `ImagePropertyConvert` module.
@@ -182,7 +181,7 @@ import torch
```
{{}}
-Additionally, remove the *face_cascade* parameter from your Python code. This was necessary for detecting a face in OpenCV and is not necessary anymore in PyTorch. The only parameters you need here are:
+Additionally, remove the face_cascade parameter from your Python code. It was needed for detecting a face in OpenCV and is no longer required in PyTorch. The only parameters you need here are:
{{< highlight filename="PyTorchSegmentationExample.py" >}}
```Python
@@ -191,7 +190,7 @@ camera = None
```
{{}}
-You can also remove the OpenCV-specific lines in *grabImage*. The function should look like this now:
+You can also remove the OpenCV-specific lines in grabImage. The function should now look like this:
{{< highlight filename="PyTorchSegmentationExample.py" >}}
```Python
@@ -202,7 +201,7 @@ def grabImage():
```
{{}}
-Adapt the function *releaseCamera* and remove the line *cv2.destroyAllWindows()*.
+Adapt the function releaseCamera and remove the line cv2.destroyAllWindows().
{{< highlight filename="PyTorchSegmentationExample.py" >}}
```Python
@@ -218,7 +217,7 @@ def releaseCamera(_):
{{}}
### Implement PyTorch Segmentation
-The first thing we need is a function for starting the camera. It closes the previous segmentation and calls the existing function *startCapture*.
+The first thing we need is a function for starting the camera. It closes the previous segmentation and calls the existing function startCapture.
{{< highlight filename="PyTorchSegmentationExample.py" >}}
```Python
@@ -241,7 +240,7 @@ Button {
```
{{}}
-Now, your new function *startWebcam* is called whenever touching the left button. As a next step, define a Python function *segmentSnapshot*. We are using a pretrained network from Torchvision. In the case you want to use other PyTorch possibilities, you can find lots of examples on their [website](https://pytorch.org/tutorials/).
+Now, your new function startWebcam is called whenever the left button is touched. As a next step, define a Python function segmentSnapshot. We are using a pretrained network from Torchvision. In case you want to explore other PyTorch models, you can find lots of examples on their [website](https://pytorch.org/tutorials/).
{{< highlight filename="PyTorchSegmentationExample.py" >}}
```Python
@@ -292,7 +291,7 @@ Button {
```
{{}}
-In step 5, we selected the class *person*. Whenever you click *Segment Snapshot*, PyTorch will try to segment all persons in the video.
+In step 5, we selected the class *person*. Whenever you click Segment Snapshot, PyTorch will try to segment all persons in the video.
{{}}
The following classes are available:
@@ -320,7 +319,7 @@ The following classes are available:
The final result of the segmentation should be a semitransparent red overlay of the persons segmented in your webcam stream.
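The per-pixel class selection behind such an overlay can be sketched with NumPy alone. The `(num_classes, height, width)` score layout and the person index of 15 are assumptions based on how torchvision's segmentation models are commonly used, not values taken from this tutorial:

```python
import numpy as np

PERSON = 15  # assumed index of the 'person' class in the model's label ordering

def person_mask(scores):
    """scores: (num_classes, height, width) array of per-pixel class
    scores, e.g. one item of a segmentation model's output batch.
    Returns a binary mask of the pixels whose winning class is person."""
    labels = scores.argmax(axis=0)  # winning class index per pixel
    return (labels == PERSON).astype(np.uint8)
```

A mask like this can then be blended over the webcam frame to produce the semitransparent red overlay.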
-![Final Segmentation result](images/tutorials/thirdparty/pytorch_example3_10.png "Final Segmentation result")
+![Final segmentation result](images/tutorials/thirdparty/pytorch_example3_10.png "Final segmentation result")
## Summary
* You can install additional Python AI packages by using the `PythonPip` module.
diff --git a/mevislab.github.io/content/tutorials/visualization.md b/mevislab.github.io/content/tutorials/visualization.md
index 39398caa3..e891b0711 100644
--- a/mevislab.github.io/content/tutorials/visualization.md
+++ b/mevislab.github.io/content/tutorials/visualization.md
@@ -33,7 +33,7 @@ An easy way to display data and images in 2D and 3D is by using the modules `Vi
2. Change the contrast of the image by clicking the right mouse button {{< mousebutton "right" >}} and moving the mouse.
-3. Zoom in and out by pressing {{< keyboard "CTRL" >}} and middle mouse button {{< mousebutton "middle" >}}.
+3. Zoom in and out by pressing {{< keyboard "Ctrl" >}} and middle mouse button {{< mousebutton "middle" >}}.
4. Toggle between multiple timepoints (if available) via {{< keyboard "ArrowLeft" >}} and {{< keyboard "ArrowRight" >}}.
@@ -55,7 +55,3 @@ The `View2DExtensions` module provides additional ways to interact with an image
5. More features, like recording movies, can be found on the help page.
6. Toggle between multiple timepoints (if available) via {{< keyboard "ArrowLeft" >}} and {{< keyboard "ArrowRight" >}}.
- -{{}} -More information on Image Processing in MeVisLab can be found {{< docuLinks "/Resources/Documentation/Publish/SDK/GettingStarted/ch12.html" "here" >}} -{{}} diff --git a/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample1.md b/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample1.md index 3ef5c38bf..9f3d6dd27 100644 --- a/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample1.md +++ b/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample1.md @@ -18,10 +18,12 @@ menu: {{< youtube "E0H87Cimu_M">}} ## Introduction -In this example, you develop a network to show some differences between volume rendering and the MeVisLab Path Tracer. You will visualize the same scene using both 3D rendering techniques and some of the modules for path tracing. +In this example, you develop a network to show some differences between volume rendering and the **MeVis Path Tracer**. You will visualize the same scene using both 3D rendering techniques and some of the modules for path tracing. + + {{}} -The MeVis Path Tracer requires an NVIDIA graphics card with CUDA support. In order to check your hardware, open MeVisLab and add a `SoPathTracer` module to your workspace. You will see a message if your hardware does not support CUDA: +The **MeVis Path Tracer** requires an NVIDIA graphics card with CUDA support. In order to check your hardware, open MeVisLab and add a `SoPathTracer` module to your workspace. You will see a message if your hardware does not support CUDA: *MeVisLab detected an Intel onboard graphics adapter. If you experience rendering problems, try setting the environment variables SOVIEW2D_NO_SHADERS and GVR_NO_GLSL.*
*Handling cudaGetDeviceCount returned 35 (CUDA driver version is insufficient for CUDA runtime version)* @@ -33,26 +35,26 @@ As a first step for comparison, you are creating a 3D scene with two spheres usi ### Volume Rendering #### Create 3D Objects -Add three `WEMInitialize` modules for one *Cube* and two *Icosphere* to your workspace and connect each of them to a `SoWEMRenderer`. Set *instanceName* of the `WEMInitialize` modules to *Cube*, *Sphere1*, and *Sphere2*. Set *instanceName* of the `SoWEMRenderer` modules to and *RenderCube*, *RenderSphere1*, and *RenderSphere2*. +Add three `WEMInitialize` modules for one *Cube* and two *Icosphere* to your workspace and connect each of them to a `SoWEMRenderer`. Set instanceName of the `WEMInitialize` modules to *Cube*, *Sphere1*, and *Sphere2*. Set instanceName of the `SoWEMRenderer` modules to *RenderCube*, *RenderSphere1*, and *RenderSphere2*. -For *RenderSphere1*, define a *Diffuse Color* *yellow* and set *Face Alpha* to *0.5*. The *RenderCube* remains as is and the *RenderSphere2* is defined as *Diffuse Color* *red* and *Face Alpha* *0.5*. +For *RenderSphere1*, define a Diffuse Color *yellow* and set Face Alpha to *0.5*. The *RenderCube* remains as is, and the *RenderSphere2* is defined with Diffuse Color *red* and Face Alpha *0.5*. Group your modules and name the group *Initialization*. Your network should now look like this: -![Example Initialization](images/tutorials/visualization/pathtracer/Example1_1.png "Example Initialization") +![Example initialization](images/tutorials/visualization/pathtracer/Example1_1.png "Example initialization") Use the Output Inspector for your `SoWEMRenderer` outputs and inspect the 3D rendering. You should have a yellow and a red sphere, and a grey cube. {{< imagegallery 3 "images/tutorials/visualization/pathtracer" "Sphere1" "Sphere2" "Cube" >}} #### Rendering -Add 2 `SoGroup` modules and a `SoBackground` to your network. Connect the modules as seen below.
+Add two `SoGroup` modules and one `SoBackground` to your network. Connect the modules as seen below. -![Example Group](images/tutorials/visualization/pathtracer/Example1_2.png "Example Group") +![Example group](images/tutorials/visualization/pathtracer/Example1_2.png "Example group") If you now inspect the output of the `SoGroup`, you will see an orange sphere. -![Missing Translation](images/tutorials/visualization/pathtracer/Example1_3.png "Missing Translation") +![Missing translation](images/tutorials/visualization/pathtracer/Example1_3.png "Missing translation") You did not translate the locations of the three objects; they are all located at the same place in world coordinates. Open the `WEMInitialize` panels of your 3D objects and define the following translations and scalings: @@ -60,11 +62,11 @@ You did not translate the locations of the three objects; they are all located a The result of the `SoGroup` now shows two spheres on a rectangular cube. -![Objects Translated and Scaled](images/tutorials/visualization/pathtracer/Example1_4.png "Objects Translated and Scaled") +![Objects translated and scaled](images/tutorials/visualization/pathtracer/Example1_4.png "Objects translated and scaled") For the viewer, you now add a `SoCameraInteraction`, a `SoDepthPeelRenderer`, and a `SoRenderArea` module to your network and connect them. -![Network with Viewer](images/tutorials/visualization/pathtracer/Example1_5.png "Network with Viewer") +![Network with viewer](images/tutorials/visualization/pathtracer/Example1_5.png "Network with viewer") You now have a 3D volume rendering of our three objects. @@ -72,59 +74,59 @@ In order to distinguish between the two viewers, you now add a label to the `SoR ![SoMenuItem](images/tutorials/visualization/pathtracer/Example1_6.png "SoMenuItem") -Define the *Label* of the `SoMenuItem` as *Volume Rendering* and set *Border Alignment* to *Top Right* and *Menu Direction* to *Horizontal* for the `SoBorderMenu`. 
+Define the Label of the `SoMenuItem` as *Volume Rendering* and set Border Alignment to *Top Right* and Menu Direction to *Horizontal* for the `SoBorderMenu`. ![Label in SoRenderArea](images/tutorials/visualization/pathtracer/Example1_7.png "Label in SoRenderArea") Finally, you should group all modules belonging to your volume rendering. -![Volume Rendering Network](images/tutorials/visualization/pathtracer/Example1_8.png "Volume Rendering Network") +![Volume rendering network](images/tutorials/visualization/pathtracer/Example1_8.png "Volume rendering network") ### Path Tracing -For the Path Tracer, you can just reuse our 3D objects from volume rendering. This helps us to compare the rendering results. +For the path tracer, you can simply reuse your 3D objects from volume rendering. This makes it easy to compare the rendering results. #### Rendering -Path Tracer modules fully integrate into MeVisLab Open Inventor; therefore, the general principles and the necessary modules are not completely different. Add a `SoGroup` module to your workspace and connect it to your 3D objects from `SoWEMRenderer`. A `SoBackground` as in volume rendering network is not necessary but you add a `SoPathTracerMaterial` and connect it to the `SoGroup`. You can leave all settings as default for now. +Path tracer modules fully integrate into MeVisLab Open Inventor; therefore, the general principles and the necessary modules are largely the same. Add a `SoGroup` module to your workspace and connect it to your 3D objects from `SoWEMRenderer`. A `SoBackground` as in the volume rendering network is not necessary, but add a `SoPathTracerMaterial` and connect it to the `SoGroup`. You can leave all settings at their defaults for now.
-![Path Tracer Material](images/tutorials/visualization/pathtracer/Example1_9.png "Path Tracer Material") +![Path tracer material](images/tutorials/visualization/pathtracer/Example1_9.png "Path tracer material") -Add a `SoPathTracerAreaLight`, a `SoPathTracerMesh`, and a `SoPathTracer` to a `SoSeparator` and connect the `SoPathTracerMesh` to your `SoGroup`. This adds your 3D objects to a Path Tracer Scene. +Add a `SoPathTracerAreaLight`, a `SoPathTracerMesh`, and a `SoPathTracer` to a `SoSeparator` and connect the `SoPathTracerMesh` to your `SoGroup`. This adds your 3D objects to a path tracer scene. -![Path Tracer](images/tutorials/visualization/pathtracer/Example1_10.png "Path Tracer") +![Path tracer network](images/tutorials/visualization/pathtracer/Example1_10.png "Path tracer network") -Selecting the `SoSeparator` output already shows a preview of the same scene rendered via Path Tracing. +Selecting the `SoSeparator` output already shows a preview of the same scene rendered via path tracing. -![Path Tracer Preview](images/tutorials/visualization/pathtracer/Example1_11.png "Path Tracer Preview") +![Path tracer preview](images/tutorials/visualization/pathtracer/Example1_11.png "Path tracer preview") Add a `SoCameraInteraction` and a `SoRenderArea` to your network and connect them as seen below. -![SoCameraInteraction](images/tutorials/visualization/pathtracer/Example1_12.png "SoCameraInteraction") +![Added SoCameraInteraction](images/tutorials/visualization/pathtracer/Example1_12.png "Added SoCameraInteraction") -You can now use both `SoRenderArea` modules to visualize the differences side by side. You should also add the `SoMenuItem`, a `SoBorderMenu`, and a `SoSeparator` to your `SoRenderArea` in order to have a label for Path Tracing inside the viewer. +You can now use both `SoRenderArea` modules to visualize the differences side by side. 
You should also add a `SoMenuItem`, a `SoBorderMenu`, and a `SoSeparator` to your `SoRenderArea` in order to have a label for path tracing inside the viewer. -Define the *Label* of the `SoMenuItem` as *Path Tracing* and set *Border Alignment* to *Top Right* and *Menu Direction* to *Horizontal* for the `SoBorderMenu`. +Define the Label of the `SoMenuItem` as *Path Tracing* and set Border Alignment to *Top Right* and Menu Direction to *Horizontal* for the `SoBorderMenu`. -![Label in SoRenderArea](images/tutorials/visualization/pathtracer/Example1_13.png "Label in SoRenderArea") +![SoMenuItem as a label in SoRenderArea](images/tutorials/visualization/pathtracer/Example1_13.png "SoMenuItem as a label in SoRenderArea") -Finally, group your Path Tracer modules to another group named *Path Tracing*. +Finally, group your path tracer modules into another group named *Path Tracing*. -![New Group for Path Tracing](images/tutorials/visualization/pathtracer/Example1_14.png "New Group for Path Tracing") +![New group for path tracing](images/tutorials/visualization/pathtracer/Example1_14.png "New group for path tracing") -![Side by Side](images/tutorials/visualization/pathtracer/Example1_15.png "Side by Side") +![Side by side: volume rendering vs. path tracing](images/tutorials/visualization/pathtracer/Example1_15.png "Side by side: volume rendering vs. path tracing") ### Share the Same Camera -Finally, you want to have the same camera perspective in both viewers, so that you can see the differences. Add a `SoPerspectiveCamera` module to your workspace and connect it to the volume rendering and the Path Tracer network. The Path Tracer network additionally needs a SoGroup, see below for connection details. You have to toggle *detectCamera* in both of your `SoCameraInteraction` modules in order to synchronize the view for both `SoRenderArea` viewers. +Finally, you want to have the same camera perspective in both viewers, so that you can see the differences.
Add a `SoPerspectiveCamera` module to your workspace and connect it to the volume rendering and the path tracer network. The path tracer network additionally needs a `SoGroup`; see below for connection details. You have to trigger Detect Camera From Scene in both of your `SoCameraInteraction` modules in order to synchronize the view for both `SoRenderArea` viewers. -![Camera Synchronization](images/tutorials/visualization/pathtracer/Example1_16.png "Camera Synchronization") +![Camera synchronization by sharing a camera](images/tutorials/visualization/pathtracer/Example1_16.png "Camera synchronization by sharing a camera") {{}} -Path Tracing requires a lot of iterations before reaching the best possible result. You can see the maximum number of iterations defined and the current iteration in the `SoPathTracer` panel. The more iterations, the better the result but the more time it takes to finalize your image. +Path tracing requires many iterations before reaching the best possible result. You can see the defined maximum number of iterations and the current iteration in the `SoPathTracer` panel. The more iterations, the better the result, but the more time it takes to finalize your image. {{}} {{< imagegallery 3 "images/tutorials/visualization/pathtracer" "PathTracer_1_Iteration" "PathTracer_100_Iterations" "PathTracer_1000_Iterations" >}} ## Results -Path Tracing provides a much more realistic way to visualize the behavior of light in a scene. It simulates the scattering and absorption of light within the volume. +Path tracing provides a much more realistic way to visualize the behavior of light in a scene. It simulates the scattering and absorption of light within the volume. ## Exercises 1. Play around with different `SoPathTracerMaterial` settings and define different materials. @@ -132,8 +134,8 @@ Path Tracing provides a much more realistic way to visualize the behavior of lig 3. Change the configurations in `SoPathTracerAreaLight` module.
## Summary -* Path Tracer modules can be used the same way as Open Inventor modules. +* Path tracer modules can be used the same way as Open Inventor modules. * A `SoPerspectiveCamera` can be used for multiple viewers to synchronize camera position. -* Path Tracing produces beautiful, photorealistic renderings, but can be computationally expensive. +* Path tracing produces beautiful, photorealistic renderings but can be computationally expensive. {{< networkfile "examples/visualization/example6/pathtracer1.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample2.md b/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample2.md index 837e593f9..7246ff92b 100644 --- a/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample2.md +++ b/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample2.md @@ -16,12 +16,15 @@ menu: # Example 6.2: Visualization Using SoPathTracer ## Introduction -In this tutorial, we will explain the basics of using the `SoPathTracer` module in MeVisLab. You will learn how to create a scene, assign materials, add light sources, and configure the PathTracer to generate enhanced renderings. +In this tutorial, we will explain the basics of using the `SoPathTracer` module in MeVisLab. You will learn how to create a scene, assign materials, add light sources, and configure the **MeVis Path Tracer** to generate enhanced renderings. + + {{}} -The MeVis Path Tracer requires an NVIDIA graphics card with CUDA support. In order to check your hardware, open MeVisLab and add a `SoPathTracer` module to your workspace. You will see a message if your hardware does not support CUDA: +The **MeVis Path Tracer** requires an NVIDIA graphics card with CUDA support. In order to check your hardware, open MeVisLab and add a `SoPathTracer` module to your workspace. 
You will see a message if your hardware does not support CUDA: + +*MeVisLab detected an Intel onboard graphics adapter. If you experience rendering problems, try setting the environment variables SOVIEW2D_NO_SHADERS and GVR_NO_GLSL.*
-*MeVisLab detected an Intel onboard graphics adapter. If you experience rendering problems, try setting the environment variables SOVIEW2D_NO_SHADERS and GVR_NO_GLSL.*
*Handling cudaGetDeviceCount returned 35 (CUDA driver version is insufficient for CUDA runtime version)* {{
}} @@ -30,11 +33,11 @@ The MeVis Path Tracer requires an NVIDIA graphics card with CUDA support. In ord ### Develop Your Network Download and open the [images](examples/visualization/example6/Volume_1.mlimage) by using a `LocalImage` module. Connect it to a `View2D` to visually inspect its contents. -![MR Image of Knee](images/tutorials/visualization/pathtracer/V6.2_1.png "MR Image of Knee in 2D") +![MR image of a knee in 2D](images/tutorials/visualization/pathtracer/V6.2_1.png "MR image of a knee in 2D") Replace the `View2D` module by a `SoExaminerViewer`. Add the modules `SoPathTracerVolume` and `SoPathTracer` to your workspace and connect them as seen below. -The `SoPathTracerVolume` enables the loading and transforming the data into renderable volumes for Path Tracing. The `SoPathTracer` is the main rendering module of the MeVis Path Tracer framework. It provides a much more realistic way to visualize the behavior of light in a scene. It simulates the scattering and absorption of light within the volume. +The `SoPathTracerVolume` enables loading the data and transforming it into renderable volumes for path tracing. The `SoPathTracer` is the main rendering module of the **MeVis Path Tracer** framework. It provides a much more realistic way to visualize the behavior of light in a scene. It simulates the scattering and absorption of light within the volume. {{}} It's essential to consistently position the `SoPathTracer `module on the right side of the scene. This strategic placement ensures that the module can render all objects located in the scene before it accurately. @@ -50,11 +53,11 @@ Now, connect the `SoLUTEditor` module to your `SoPathTracerVolume` as illustrate ![SoLUTEditor](images/tutorials/visualization/pathtracer/SoLUTEditor1.png "SoLUTEditor") -Add a `MinMaxScan` module to the `LocalImage` module and open the panel. The module shows the actual minimal and maximal gray values of the volume.
+Add a `MinMaxScan` module to the `LocalImage` module and open the panel. The module shows the actual minimal and maximal gray values of the volume image. -Open the panel of the `SoLUTEditor` module and define Range between *0* and *2047* as calculated by the `MinMaxScan`. +Open the panel of the `SoLUTEditor` module and define *Range* between *0* and *2047* as calculated by the `MinMaxScan`. -![SoLUTEditor](images/tutorials/visualization/pathtracer/Range_MinMaxScan.png "MinMaxScan") +![MinMaxScan](images/tutorials/visualization/pathtracer/Range_MinMaxScan.png "MinMaxScan") Next, add lights to your scene. Connect a `SoPathTracerAreaLight` and a `SoPathTracerBackgroundLight` module to your `SoExaminerViewer` to improve scene lighting. @@ -99,7 +102,7 @@ Load the [Bones mask](examples/visualization/example6/edited_Bones.mlimage) by u ![Bones mask](images/tutorials/visualization/pathtracer/View2D_Bones.png "Bones mask") -Start by disabling the visibility of your first volume by toggeling `SoPathTracerVolume` Enabled field off. This helps to improve the rendering of the bones itself and makes it easier to define colors for your LUT. +Start by disabling the visibility of your first volume by toggling the `SoPathTracerVolume` Enabled field off. This helps to improve the rendering of the bones themselves and makes it easier to define colors for your LUT. #### Load Example LUT from File Once again, you can decide to define the LUT yourself in `SoLUTEditor` module, or load a prepared XML File in a `LUTLoad` module as provided [here](examples/visualization/example6/LUT_Bones.xml). @@ -107,15 +110,15 @@ Once again, you can decide to define the LUT yourself in `SoLUTEditor` module, o #### Manually Define LUT If you want to define your own LUT, connect a `MinMaxScan` module to your `LocalImage1` and define the *Range* for the `SoLUTEditor` as already done before.
-![MinMaxScan of Bones mask](images/tutorials/visualization/pathtracer/MinMaxScan_Bones.png "MinMaxScan of Bones mask") +![MinMaxScan of bones mask](images/tutorials/visualization/pathtracer/MinMaxScan_Bones.png "MinMaxScan of bones mask") -Open the panel of `SoLUTEditor1` for the bones and go to tab *Range* and set *New Range Min* to *0* and *New Range Max* to *127*. Define the following colors in the tab *Editor*. +Open the panel of `SoLUTEditor1` for the bones, go to the *Range* tab, and set New Range Min to *0* and New Range Max to *127*. Define the following colors in the *Editor* tab. -![SoLUTEditor1](images/tutorials/visualization/pathtracer/V6.2_11_LUT_Bones.png "SoLUTEditor1") +![Using a SoLUTEditor for visualizing the bones](images/tutorials/visualization/pathtracer/V6.2_11_LUT_Bones.png "Using a SoLUTEditor for visualizing the bones") You can increase the Shininess of the bones and change the Diffuse color in the *Surface Brdf* tab within the `SoPathTracerMaterial1`. Also set Specular to *0.5*, Shininess to *0.904*, and Specular Intensity to *0.466*. -![SoPathTracerMaterial1](images/tutorials/visualization/pathtracer/V6.2_SoPathTracerMaterial.png "SoPathTracerMaterial1") +![Setting material properties with SoPathTracerMaterial](images/tutorials/visualization/pathtracer/V6.2_SoPathTracerMaterial.png "Setting material properties with SoPathTracerMaterial") ## Visualize Vessels Repeat the process for the vessels. Add another `LocalImage`, `SoPathTracerVolume`, `SoLUTEditor` (or `LUTLoad`), and `View2D` module as seen below. Load this [Vessels mask](examples/visualization/example6/edited_Vessels.mlimage) and check it using `View2D`. @@ -130,19 +133,19 @@ Connect the `MinMaxScan` to your `LocalImage2`. Access the `SoLUTEditor2` panel in the tab *Range* and set the New Range Min to *0* and the New Range Max to *255*. Additionally, modify the illustrated color settings within the *Editor* tab.
-![Vessels](images/tutorials/visualization/pathtracer/MinMaxScan_Vessels.png "MinMaxScan of Vessels mask") +![MinMaxScan of vessels mask](images/tutorials/visualization/pathtracer/MinMaxScan_Vessels.png "MinMaxScan of vessels mask") -![SoLUTEditor2](images/tutorials/visualization/pathtracer/V6.2_SoLUTEditor1_Vessels.png "SoLUTEditor2") +![Using a SoLUTEditor for visualizing blood vessels](images/tutorials/visualization/pathtracer/V6.2_SoLUTEditor1_Vessels.png "Using a SoLUTEditor for visualizing blood vessels") -Now you should set your first volume visible again by toggling `SoPathTracerVolume` *Enabled* field to on. +Now you should set your first volume visible again by toggling the `SoPathTracerVolume` Enabled field back on. -![Final Resul](images/tutorials/visualization/pathtracer/FinalResult.png "Final Result") +![Final result](images/tutorials/visualization/pathtracer/FinalResult.png "Final result") {{}} The resulting rendering in `SoExaminerViewer` might look different depending on your defined LUTs. {{}} -![Final Resul](images/tutorials/visualization/pathtracer/FinalResult2.png "Final Result with Enhanced Visualization") +![Final result with enhanced visualization](images/tutorials/visualization/pathtracer/FinalResult2.png "Final result with enhanced visualization") ## Summary: * You can generate photorealistic renderings using `SoPathTracer` and associated modules.
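Conceptually, the LUTs defined in `SoLUTEditor` are plain lookup tables that map gray values to RGBA colors by interpolating between control points over the defined range. A minimal NumPy sketch of that mapping (an illustration only, not MeVisLab API code; the control points below are invented):

```python
import numpy as np

# Hypothetical LUT control points (gray value -> RGBA in 0..1),
# similar to what one would click together in a LUT editor
# for a volume whose MinMaxScan range is 0..2047.
gray = np.array([0.0, 1023.0, 2047.0])
rgba = np.array([
    [0.0, 0.0, 0.0, 0.0],   # low intensities: fully transparent
    [0.8, 0.4, 0.2, 0.5],   # mid range: semi-transparent tissue tone
    [1.0, 1.0, 1.0, 1.0],   # high intensities: opaque white
])

def apply_lut(volume, gray, rgba):
    """Map each voxel's gray value to RGBA via piecewise-linear interpolation."""
    channels = [np.interp(volume, gray, rgba[:, c]) for c in range(4)]
    return np.stack(channels, axis=-1)

voxels = np.array([0.0, 1023.0, 2047.0])
colors = apply_lut(voxels, gray, rgba)
print(colors.shape)  # (3, 4)
```

The renderer evaluates its LUTs on the GPU, but the idea is the same: define the range (here *0* to *2047*, as read from `MinMaxScan`) and assign colors and alpha across it.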
diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample1.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample1.md index 863710478..b235ad13a 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample1.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample1.md @@ -35,7 +35,7 @@ The `SynchroView2D` module is explained {{< docuLinks "/Standard/Documentation/P ### Develop Your Network Start the example by adding the module `LocalImage` to your workspace to load the example image *Tumor1_Head_t1.small.tif*. Next, add and connect the following modules as shown. -![SynchroView2D](images/tutorials/visualization/V1_01.png "SynchroView2D Viewer") +![SynchroView2D viewer](images/tutorials/visualization/V1_01.png "SynchroView2D viewer") ## Summary * Multiple images can be synchronized by the `SynchroView2D` module. diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample2.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample2.md index 84e79d603..57acaf994 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample2.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample2.md @@ -8,7 +8,7 @@ tags: ["Beginner", "Tutorial", "Visualization", "2D", "Magnifier"] menu: main: identifier: "visualization_example2" - title: "Display an Image in Different Viewing Directions and Mark Locations in the Image for Creating a Magnifier From a Rectangle" + title: "Display an Image in Different Viewing Directions and Mark Locations in the Image for Creating a Magnifier from a Rectangle" weight: 560 parent: "visualization" --- @@ -22,14 +22,13 @@ Medical images are typically displayed in three different viewing directions (se Using the viewer `OrthoView2D`, you are able to decide which viewing direction you like to use. 
In addition to that, you have the opportunity to display all three orthogonal viewing directions simultaneously. Here, we like to display an image of the head in all three viewing directions and mark positions in the image. -![Body Planes](images/tutorials/visualization/V2_00.png "Body Planes") +![Body planes](images/tutorials/visualization/V2_00.png "Body planes") ## Steps to Do ### Develop Your Network In this example, use the module `LocalImage` to load the example *image MRI_Head.tif*. Now, connect the module `OrthoView2D` to the loaded image. The image is displayed in three orthogonal viewing directions. The yellow marker displays the same voxel in all three images. You can scroll through the slices in all three viewing directions. - {{}} In the case your image is black, change the *Window* and *Center* values by moving the mouse with right mouse button {{< mousebutton "right" >}} pressed. {{}} @@ -44,16 +43,16 @@ The module enables the selection of an image position via mouse click {{< mouseb ![SoView2DPosition](images/tutorials/visualization/V2_02.png "SoView2DPosition") ### SoView2DRectangle -Instead of points, we like to mark areas. In order to do that, replace the module `SoView2DPosition` with the module `SoView2DRectangle`. The module allows to add a rectangle to the image. Left-click {{< mousebutton "left" >}} on the image and draw a rectangle. In the `OthoView2D`, the rectangle is displayed in every viewing direction. +Instead of points, we like to mark areas. In order to do that, replace the module `SoView2DPosition` with the module `SoView2DRectangle`. The module allows you to add a rectangle to the image. Left-click {{< mousebutton "left" >}} on the image and draw a rectangle. In the `OrthoView2D`, the rectangle is displayed in every viewing direction. ![SoView2DRectangle](images/tutorials/visualization/V2_03.png "SoView2DRectangle") ### Using a Rectangle to Build a Magnifier We like to use the module `SoView2DRectangle` to create a magnifier.
In order to do that, add the following modules to your workspace and connect them as shown below. We need to connect the module `SoView2DRectangle` to a hidden input connector of the module `SynchroView2D`. To be able to do this, click on your workspace and afterward press {{< keyboard "SPACE" >}}. You can see that `SynchroView2D` possesses Open Inventor input connectors. You can connect your module `SoView2DRectangle` to one of these connectors. -![Hidden Inputs of SynchroView2D](images/tutorials/visualization/V2_05.png "Hidden Inputs of SynchroView2D") +![Hidden inputs of SynchroView2D](images/tutorials/visualization/V2_05.png "Hidden inputs of SynchroView2D") -![Connect Hidden Inputs of SynchroView2D](images/tutorials/visualization/V2_06.png "Connect Hidden Inputs of SynchroView2D") +![Connect hidden inputs of SynchroView2D](images/tutorials/visualization/V2_06.png "Connect hidden inputs of SynchroView2D") In addition to that, add two instances of the module `DecomposeVector3` to your network. In MeVisLab, different data types exist, for example, vectors, or single variables, which contain the data type float or integer. This module can be used to convert field values of type vector (in this case, a vector consisting of three entries) into three single coordinates. You will see in the next step why this module can be useful. @@ -67,11 +66,11 @@ Now, open the panels of the modules `SoView2DRectangle`, `DecomposeVector3`, and We rename the `DecomposeVector3` modules (press {{< keyboard "F2" >}} to do that) here for a better overview. -In the panel of the module `Rectangle` in the box *Position*, you can see the position of the rectangle given in two 3D vectors. +In the panel of the module `SoView2DRectangle`, in the box *Position*, you can see the world position of the rectangle given in two 3D vectors. We like to use the modules `DecomposeVector3` to extract the single x, y, and z values of the vector. 
For that, create a parameter connection from the field Start World Pos to the vector of the module we named `StartWorldPos_Rectangle` and create a connection from the field End World Pos to the vector of module `EndWorldPos_Rectangle`. The decomposed coordinates can be now used for further parameter connections. -![Parameter Connections](images/tutorials/visualization/V2_09.png "Parameter Connections") +![Parameter connections](images/tutorials/visualization/V2_09.png "Parameter connections") Open the panel of the module `SubImage`. Select the Mode *World Start & End (Image Axis Aligned)*. Toggle the field Auto apply *on*. @@ -79,15 +78,15 @@ Open the panel of the module `SubImage`. Select the Mode *World S Make sure to also check Auto-correct for negative subimage extents, so that you can draw rectangles from left to right and from right to left. {{}} -![World Coordinates](images/tutorials/visualization/V2_10.png "World Coordinates") +![World coordinates](images/tutorials/visualization/V2_10.png "World coordinates") Now, create parameter connections from the fields X, Y, Z of the module `StartWorldPos_Rectangle` to the field Start X, Start Y, Start Z in the panel of the module `SubImage`. Similarly, connect the parameter fields X, Y, Z of the module `EndWorldPos_Rectangle` to the field End X, End Y, End Z in the panel of the module `SubImage`. -![Another Parameter Connection](images/tutorials/visualization/V2_11.png "Another Parameter Connection") +![Another parameter connection](images/tutorials/visualization/V2_11.png "Another parameter connection") With this, you finished your magnifier. Open the viewer and draw a rectangle on one slice to see the result. 
-![Final Magnifier with SubImage](images/tutorials/visualization/V2_12.png "Final Magnifier with SubImage") +![Final magnifier with SubImage](images/tutorials/visualization/V2_12.png "Final magnifier with SubImage") ## Exercises Invert the image inside your magnified `SubImage` without changing the original image. You can use `Arithmetic*` modules for inverting. @@ -95,7 +94,7 @@ Invert the image inside your magnified `SubImage` without changing the original ## Summary * The module `OrthoView2D` provides coronal, axial, and sagittal views of an image. * The `SubImage` module allows to define a region of an input image to be treated as a separate image. -* Single x, y, and z coordinates can be transferred to a 3-dimensional vector and vice versa by using `ComposeVector3` and `DecomposeVector3`. +* Single x, y, and z coordinates can be transferred to a three-dimensional vector and vice versa by using `ComposeVector3` and `DecomposeVector3`. * Some modules provide hidden inputs and outputs that can be shown via {{< keyboard "SPACE" >}}. {{< networkfile "examples/visualization/example2/VisualizationExample2.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample3.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample3.md index 2dd12c3d3..328e74f2a 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample3.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample3.md @@ -23,7 +23,7 @@ In this example we will show you how to blend a 2D image over another one. With ## Steps to Do ### Develop Your Network -Start this example by adding the shown modules, connecting the modules to form a network and loading the example image *Bone.tiff*. +Start this example by adding the shown modules, connecting the modules to form a network, and loading the example image *Bone.tiff*. Open the panel of the module `Threshold` and configure the module as shown below. 
@@ -37,9 +37,9 @@ The `Threshold` module is explained {{< docuLinks "/Standard/Documentation/Publi The module `Threshold` compares the value of each voxel of the image with a customizable threshold. In this case: If the value of the chosen voxel is lower than the threshold, the voxel value is replaced by the minimum value of the image. If the value of the chosen voxel is higher than the threshold, the voxel value is replaced by the maximum value of the image. With this, we can construct a binary image that divides the image into bone (white) and no bone (black). -Select output of the `Threshold` module to see the binary image in Output Inspector. +Select the output of the `Threshold` module to see the binary image in the OutputInspector. -![Image Threshold](images/tutorials/visualization/V3_01.png "Image Threshold") +![Image threshold](images/tutorials/visualization/V3_01.png "Image threshold") ### Overlays The module `SoView2DOverlay` blends a 2D image over another one in a 2D viewer. In this case, all voxels with a value above the `Threshold` are colored and therefore highlighted. The colored voxels are then blended over the original image. Using the panel of `SoView2DOverlay`, you can select the color of the overlay. @@ -64,7 +64,7 @@ The `SoView2DOverlay` module is explained {{< docuLinks "/Standard/Documentation * You can also use a 3D `SoRenderArea` for the same visualizations. An example can be seen in the next [Example 4](tutorials/visualization/visualizationexample4 "Display images converted to Open Inventor scene objects"). {{}} -The `SoView2DOverlay` module is not intended to work with `OrthoView2D`; in this case, use a `GVROrthoOverlay`. +The `SoView2DOverlay` module is not intended to work with `OrthoView2D`; in this case, use a `GVROrthoOverlay` or `SoView2DOverlayMPR`.
{{}} {{< networkfile "examples/visualization/example3/VisualizationExample3.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample4.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample4.md index 19c5571df..a913e96dc 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample4.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample4.md @@ -18,11 +18,11 @@ menu: {{< youtube "WaD6zuvVNek" >}} ## Introduction -In the previous example you learned how to use the module `SoView2DOverlay` together with a `View2D`. MeVisLab provides a whole family of `SoView2D*` modules (`SoView2DOverlay`, `SoView2DRectangle`, `SoView2DGrid`, ...). These modules are derived from `SoView2DExtension`, which extends the `SoView2D` with specialized interaction and rendering. `SoView2D` itself renders a slice or a slab of a voxel image as a 2D image on the screen. +In the previous example you learned how to use the module `SoView2DOverlay` together with a `View2D`. MeVisLab provides a whole family of *SoView2D* modules (`SoView2DOverlay`, `SoView2DRectangle`, `SoView2DGrid`, ...). These modules are derived from `SoView2DExtension`, which extends the `SoView2D` with specialized interaction and rendering. `SoView2D` itself renders a slice or a slab of a voxel image as a 2D image on the screen. {{}} -More information about the SoView2D family can be found {{< docuLinks "/Resources/Documentation/Publish/SDK/ToolBoxReference/SoView2DDocPage.html" "here" >}} and in the {{< docuLinks "/Resources/Documentation/Publish/SDK/ToolBoxReference/classSoView2D.html" "SoView2D Reference" >}}. +More information about the *SoView2D* family can be found {{< docuLinks "/Resources/Documentation/Publish/SDK/ToolBoxReference/SoView2DDocPage.html" "here" >}} and in the {{< docuLinks "/Resources/Documentation/Publish/SDK/ToolBoxReference/classSoView2D.html" "SoView2D Reference" >}}. 
{{}} @@ -31,7 +31,7 @@ More information about the SoView2D family can be found {{< docuLinks "/Resource ## Steps to Do ### Develop Your Network -We will start the example by creating an overlay again. Add the following modules and connect them as shown. Select a *Threshold* and a *Comparison Operator* for the module `Threshold` as in the previous example. The module `SoView2D` converts the image into a scene object. The image as well as the overlay is rendered and displayed by the module `SoRenderArea`. +We will start the example by creating an overlay again. Add the following modules and connect them as shown. Select a Threshold and a Comparison operator for the module `Threshold` as in the previous example. The module `SoView2D` converts the image slice(s) into a texture. The image as well as the overlay is rendered and displayed by the module `SoRenderArea`. ![SoRenderArea](images/tutorials/visualization/V4_01.png "SoRenderArea") @@ -48,13 +48,13 @@ With the help of the module `SoRenderArea` you can record screenshots and movies ### Create Screenshots and Movies If you now select your favorite slice of the bone in the viewer `SoRenderArea` and press {{< keyboard "F11" >}}, a screenshot is taken and displayed in the Screenshot Gallery. For recording a movie, press {{< keyboard "F9" >}} to start the movie and {{< keyboard "F10" >}} to stop recording. You can find the movie in the Screenshot Gallery. -![Record Movies and Snapshots](images/tutorials/visualization/V4_05.png "Record Movies and Snapshots") +![Record movies and snapshots](images/tutorials/visualization/V4_05.png "Record movies and snapshots") ## Exercises 1. Create movies of a 3D scene. ## Summary -* Modules of the `SoView2D` family create or interact with scene objects and are based on the module `SoView2D`, which can convert a voxel image into a scene object. 
+* Modules of the *SoView2D* family create or interact with scene objects and are based on the module `SoView2D`, which can convert voxel image slice(s) into a texture. * The `SoRenderArea` module provides functionalities for screenshots and movie generation. {{< networkfile "examples/visualization/example4/VisualizationExample4.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample5.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample5.md index 507d0dfe0..566f9c866 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample5.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample5.md @@ -30,7 +30,7 @@ Implement the following network and open the image *$(DemoDataPath)/BrainMultiMo The module `SoGVRVolumeRenderer` allows volume rendering of 3D and 4D images. {{}} -Additional information about Volume Rendering can be found here: {{< docuLinks "/Standard/Documentation/Publish/Overviews/GVROverview.html#top" "Giga Voxel Renderer">}} +Additional information about volume rendering can be found here: {{< docuLinks "/Standard/Documentation/Publish/Overviews/GVROverview.html#top" "Giga Voxel Renderer">}} {{}} [//]: <> (MVL-653) @@ -40,11 +40,11 @@ We like to add a surface color to the head. In order to do that, we add the modu ![SoLUTEditor](images/tutorials/visualization/V6_02.png "SoLUTEditor") -To change the color, open the panel of `SoLUTEditor`. In this editor we can change color and transparency interactively (for more information, see the {{< docuLinks "/Standard/Documentation/Publish/ModuleReference/SoLUTEditor.html" "help page">}}). Here, we have a range from black to white and from complete transparency to full opacity. +To change the color, open the panel of `SoLUTEditor`. 
In this editor, we can change color and transparency interactively (for more information, see the {{< docuLinks "/Standard/Documentation/Publish/ModuleReference/SoLUTEditor.html" "help page">}}). Here, we have a range from black to white and from complete transparency to full opacity. ![SoLUTEditor change colors](images/tutorials/visualization/V6_03.png "SoLUTEditor change colors") -We now like to add color. New color points can be added by clicking on the color bar at the bottom side of the graph and existing points can be moved by dragging. You can change the color of each point under *Color*. +We now want to add color. New color points can be added by clicking {{< mousebutton "left" >}} on the color bar at the bottom side of the graph and existing points can be moved by dragging. You can change the color of each point with the field Color. ![SoLUTEditor add colors](images/tutorials/visualization/V6_04.png "SoLUTEditor add colors") @@ -53,9 +53,9 @@ As a next step, we add some dynamics to the 3D scene: We like to rotate the head ![SoRotationXYZ](images/tutorials/visualization/V6_05.png "SoRotationXYZ") -Open the panels of both modules and select the axis the image should rotate around. In this case, the z-axis was selected. Now, build a parameter connection from the parameter *Time* out of the module `SoElapsedTime` to the parameter *Angle* of the module `SoRotationXYZ`. The angle changes with time and the head starts turning. +Open the panels of both modules and select the axis the image should rotate around. In this case, the z-axis was selected. Now, establish a parameter connection from the parameter Time out of the module `SoElapsedTime` to the parameter Angle of the module `SoRotationXYZ`. The angle changes with time and the head starts turning. -![Time and Angle](images/tutorials/visualization/V6_06.png "Time and Angle") +![Time and angle](images/tutorials/visualization/V6_06.png "Time and angle") ## Exercises 1. Change the rotation speed. 
@@ -63,7 +63,7 @@ Open the panels of both modules and select the axis the image should rotate arou 3. Pause the rotation on pressing {{< keyboard "SPACE" >}}. ## Summary -* The module `SoGVRVolumeRenderer` renders paged images like DICOM files in a GVR. +* The module `SoGVRVolumeRenderer` renders paged images like DICOM files with a gigavoxel renderer (GVR). * Lookup tables (LUT) allow you to modify the color of your renderings. {{< networkfile "examples/visualization/example5/VisualizationExample5.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample6.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample6.md index ac212a979..c5640ef27 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample6.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample6.md @@ -22,7 +22,9 @@ menu: ## Introduction -The MeVis Path Tracer offers a Monte Carlo Path Tracing framework running on CUDA GPUs. It offers photorealistic rendering of volumes and meshes, physically based lightning with area lights and soft shadows and fully integrates into MeVisLab Open Inventor (camera, depth buffer, clipping planes, etc.). +The **MeVis Path Tracer** offers a Monte Carlo Path Tracing framework running on CUDA GPUs. It provides photorealistic rendering of volumes and meshes, physically based lighting with area lights and soft shadows, and fully integrates into MeVisLab Open Inventor (camera, depth buffer, clipping planes, etc.). + + {{}} CUDA is a parallel computing platform and programming model created by NVIDIA. For further information, see [NVIDIA website](https://blogs.nvidia.com/blog/2012/09/10/what-is-cuda-2/). @@ -30,7 +32,7 @@ CUDA is a parallel computing platform and programming model created by NVIDIA. 
{{< imagegallery 5 "images/tutorials/visualization/pathtracer" "PathTracer1" "PathTracer2" "PathTracer3" "PathTracer4" "PathTracer5" >}} -The `SoPathTracer` module implements the main renderer (like the `SoGVRVolumeRenderer`). It collects all `SoPathTracer*` extensions (on its left side) in the scene and renders them. Picking is also supported, but it supports currently only the first hit position instead of a full hit profile. It supports an arbitrary number of objects with different orientation and bounding boxes. +The `SoPathTracer` module implements the main renderer (like the `SoGVRVolumeRenderer`). It collects all `SoPathTracer*` extensions (on its left side) in the scene and renders them. Picking is also supported, but it returns only the first hit position instead of a full hit profile. It supports an arbitrary number of objects with different orientations and bounding boxes. ## Path Tracing Path Tracing allows interactive, photorealistic 3D environments with dynamic light and shadow, reflections, and refractions. @@ -42,7 +44,7 @@ Monte Carlo path tracing is a technique used to simulate the behavior of light i [Ray tracing](https://en.wikipedia.org/wiki/Ray_tracing_(graphics)) is a technique for modelling light transport. It follows all light rays throughout the entire scene. Depending on the scene, this takes a lot of time to fully compute the resulting pixels. In contrast to ray tracing, path tracing only traces the most likely path of the light by using the [Monte Carlo method](https://en.wikipedia.org/wiki/Monte_Carlo_method). Computation is much faster but the results are comparable. {{}} -For more information about Path Tracing, see the [NVIDIA website](https://blogs.nvidia.com/blog/2022/03/23/what-is-path-tracing/). +For more information about path tracing, see the [NVIDIA website](https://blogs.nvidia.com/blog/2022/03/23/what-is-path-tracing/). {{}} ## Modules @@ -69,7 +71,7 @@ There are various extensions that can be used. 
* Allows to load an 8-bit tag volume * The tags are used to select a per-object LUT and/or material * A 2D LUT can be provided using `LUTConcat` or `SoLUTEditor2D` - * Per-tag materials can be provided by adding multiple materials to the *inMaterial* scene + * Per-tag materials can be provided by adding multiple materials to the inMaterial input * Useful to render segmented objects * [SoPathTracerVolumeInstance](https://mevislabdownloads.mevis.de/docs/current/MeVisLab/Standard/Documentation/Publish/ModuleReference/SoPathTracerVolumeInstance.html#SoPathTracerVolumeInstance) can be used to render a [SoPathTracerVolume](https://mevislabdownloads.mevis.de/docs/current/MeVisLab/Standard/Documentation/Publish/ModuleReference/SoPathTracerVolume.html#SoPathTracerVolume) with different transformation, subvolume, LUT, material, ... * Allows to instantiate an existing volume @@ -84,7 +86,7 @@ There are various extensions that can be used. * Allows to render a cut slice through a volume * Allows to set an arbitrary plane and works on volumes and instances * Has its own LUT and can be opaque or transparent -* [SoPathTracerIsoSurface](https://mevislabdownloads.mevis.de/docs/current/MeVisLab/Standard/Documentation/Publish/ModuleReference/SoPathTracerIsoSurface.html#SoPathTracerIsoSurface) renders an iso surface (with first hit refinement) on the given base volume. +* [SoPathTracerIsoSurface](https://mevislabdownloads.mevis.de/docs/current/MeVisLab/Standard/Documentation/Publish/ModuleReference/SoPathTracerIsoSurface.html#SoPathTracerIsoSurface) renders an isosurface (with first hit refinement) on the given base volume. * Allows to render an isosurface of a volume * Works on volumes and instances * Supports opaque and transparent surfaces @@ -93,7 +95,7 @@ There are various extensions that can be used. 
* Arbitrary material can be specified ### Geometry -* [SoPathTracerMesh](https://mevislabdownloads.mevis.de/docs/current/MeVisLab/Standard/Documentation/Publish/ModuleReference/SoPathTracerMesh.html#SoPathTracerMesh) scans the input scene for triangle meshes and ray traces them. +* [SoPathTracerMesh](https://mevislabdownloads.mevis.de/docs/current/MeVisLab/Standard/Documentation/Publish/ModuleReference/SoPathTracerMesh.html#SoPathTracerMesh) scans the input scene for triangle meshes and renders them using ray tracing. * Allows to render arbitrary triangle meshes * Scans the input scene for triangle meshes and converts them to a bounding volume hierarchy (BVH) * Supports different materials by adding `SoPathTracerMaterials` diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample7.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample7.md index 1d3282a1a..b6e1b5ba0 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample7.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample7.md @@ -29,7 +29,7 @@ Add the modules `LocalImage` and `OrthoView2D` to your workspace and connect the The `OrthoView2D` module allows you to select multiple layouts. Select layout *Cube Equal*. The layout shows your image in three orthogonal viewing directions. The top left segment remains empty. -![OrthoView2D Layouts](images/tutorials/image_processing/network_example7_2.png "OrthoView2D Layouts") +![OrthoView2D layouts](images/tutorials/image_processing/network_example7_2.png "OrthoView2D layouts") We now want to use a 3D rendering in the top left segment whenever the layout *Cube Equal* is chosen. Add a `View3D` and a `SoViewportRegion` module to your workspace. Connect the `LocalImage` with your `View3D`. The image is rendered in 3D. Hit {{< keyboard "SPACE" >}} on your keyboard to make the hidden output of the `View3D` module visible. 
Connect it with your `SoViewportRegion` and connect the `SoViewportRegion` with the inInvPreLUT input of the `OrthoView2D`. @@ -51,13 +51,13 @@ Add a `SoCameraInteraction` module between the `View3D` and the `SoViewportRegio You have now successfully added the `View3D` to the `OrthoView2D`, but there is still a problem remaining: If you change the layout to something other than *LAYOUT_CUBE_EQUAL*, the 3D content remains visible. -We can use a `StringUtils` module to resolve that. Set Operation to *Compare* and draw a parameter connection from the field OrthoView2D.layout to the field StringUtils.string1. The currently selected layout is displayed as String A. Enter *LAYOUT_CUBE_EQUAL* as String B. Now, draw a parameter connection from the field StringUtils.boolResult to the field SoViewportRegion.on. +We can use a `StringUtils` module to resolve that. Set Operation to *Compare* and establish a parameter connection from the field OrthoView2D.layout to the field StringUtils.string1. The currently selected layout is displayed as String A. Enter *LAYOUT_CUBE_EQUAL* as String B. Now, establish a parameter connection from the field StringUtils.boolResult to the field SoViewportRegion.on. ![StringUtils](images/tutorials/image_processing/network_example7_7.png "StringUtils") If the selected layout in `OrthoView2D` now matches the string *LAYOUT_CUBE_EQUAL* (the field boolResult of the `StringUtils` module is *TRUE*), the `SoViewportRegion` is turned *on*. In any other case, the 3D segment is not visible. -![Final Network](images/tutorials/image_processing/network_example7_8.png "Final Network") +![Final network](images/tutorials/image_processing/network_example7_8.png "Final network") ## Summary * The module `SoViewportRegion` renders a subgraph into a specified viewport region (VPR). 
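The layout check described above reduces to a single string comparison driving a boolean field; a plain-Python sketch of that predicate (the function name is ours, purely illustrative, not a MeVisLab API):

```python
def viewport_region_on(layout: str) -> bool:
    # Mirrors StringUtils in 'Compare' mode: boolResult is True only
    # when the current OrthoView2D layout equals the Cube Equal constant.
    return layout == "LAYOUT_CUBE_EQUAL"
```

Inside the network, the parameter connection from StringUtils.boolResult to SoViewportRegion.on plays exactly this role: any other layout string yields *FALSE* and hides the 3D segment.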
diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample8.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample8.md index 4ead740ea..064dafe91 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample8.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample8.md @@ -23,7 +23,7 @@ In this tutorial, we are using an input mask to create a vessel centerline using ## Steps to Do ### Develop Your Network -Load the example [tree mask](examples/visualization/example8/EditedImage.mlimage) by using the `LocalImage` module. Connect the output to a `DtfSkeletonization` module as seen below. The initial output of the `DtfSkeletonization` module is empty. Press the *Update* button to calculate the skeleton and the erosion distances. +Load the example [tree mask](examples/visualization/example8/EditedImage.mlimage) by using the `LocalImage` module. Connect the output to a `DtfSkeletonization` module as seen below. The initial output of the `DtfSkeletonization` module is empty. Press the Update button to calculate the skeleton and the erosion distances. ![Network](images/tutorials/visualization/V8_1.png "Network") @@ -31,11 +31,11 @@ Below you can see the output of the original image taken from the `LocalImage` m ![Output comparison](images/tutorials/visualization/V8_1b.png "Output comparison") -The output *DtfSkeletonization.outBase1* shows nothing. Here you can find the three-dimensional graph of the vascular structures. To generate it, open the panel of the `DtfSkeletonization` module, set *Update Mode* to *Auto Update*, and select *Update skeleton graph*. Now the output additionally provides a 3D graph. Additionally, enable the *Compile Graph Voxels* to provide all object voxels at the output. +The output *DtfSkeletonization.outBase1* shows nothing. Here you can find the three-dimensional graph of the vascular structures. 
To generate it, open the panel of the `DtfSkeletonization` module, set Update Mode to *Auto Update*, and check Update skeleton graph. Now, the output also provides a 3D graph. Additionally, enable Compile graph voxels to provide all object voxels at the output. ![DtfSkeletonization](images/tutorials/visualization/V8_02.png "DtfSkeletonization") -You can use the *Output Inspector* to see the 3D graph. +You can use the Output Inspector to see the 3D graph. ![Graph output of DtfSkeletonization](images/tutorials/visualization/V8_MLImage.png "Graph output of DtfSkeletonization") @@ -53,13 +53,13 @@ Use the `SoLUTEditor` for the `View2D`, too. ![Output Inspector](images/tutorials/visualization/V8_04_OutputInspector.png "Output Inspector") -Open the Panel of the `SoLUTEditor` and select tab *Range*. Define *New Range Min* as *-1* and *New Range Max* as *0*. +Open the panel of the `SoLUTEditor` and select tab *Range*. Define New Range Min as *-1* and New Range Max as *0*. -![SoLUTEditor Range](images/tutorials/visualization/V8_04_Range.png "SoLUTEditor Range") +![SoLUTEditor range](images/tutorials/visualization/V8_04_Range.png "SoLUTEditor range") Change to *Editor* tab and define the following LUT: -![SoLUTEditor Editor](images/tutorials/visualization/V8_04_Editor.png "SoLUTEditor Editor") +![SoLUTEditor editor](images/tutorials/visualization/V8_04_Editor.png "SoLUTEditor editor") The viewers now show your vessel graph. @@ -83,17 +83,17 @@ ctx.field("GraphToVolume.update").touch() ``` {{}} -First, we always want a fresh skeleton. We touch the *update* trigger of the module `DtfSkeletonization`. Then, we get the graph from the *DtfSkeletonization.outBase1* output. If a valid graph is available, we walk through all edges of the graph and print the ID of each edge. In the end, we update the GraphToVolume module to get the calculated values of the Python script in the viewers. Click *Execute*. +First, we always want a fresh skeleton. 
We touch the *update* trigger of the module `DtfSkeletonization`. Then, we get the graph from the *DtfSkeletonization.outBase1* output. If a valid graph is available, we walk through all edges of the graph and print the ID of each edge. In the end, we update the `GraphToVolume` module to get the calculated values of the Python script in the viewers. Click Execute. The Debug Output of the MeVisLab IDE shows a numbered list of edge IDs from 1 to 153. ![RunPythonScript](images/tutorials/visualization/V8_05.png "RunPythonScript") -We now want the edge ID to be used for coloring each of the skeletons differently. Open the Panel of the `SoLUTEditor` and select tab *Range*. Define *New Range Min* as *0* and *New Range Max* as *153*. Define different colors for your LUT. +We now want the edge ID to be used for coloring each of the skeletons differently. Open the panel of the `SoLUTEditor` and select tab *Range*. Define New Range Min as *0* and New Range Max as *153*. Define different colors for your LUT. ![SoLUTEditor](images/tutorials/visualization/V8_05_LUT.png "SoLUTEditor") -The `SoGVRVolumeRenderer` module also needs a different setting. Open its panel in the *Main* tab, select *Illuminated* as the *Render Mode*. Adjust the *Quality* setting to *0.10*. On tab *Advanced*, set *Filter Volume Data* to *Nearest*. Change to the *Illumination* tab and define below parameters: +The `SoGVRVolumeRenderer` module also needs a different setting. Open its panel and, in the *Main* tab, select *Illuminated* as the Render Mode. Adjust the Quality setting to *0.10*. On tab *Advanced*, set Filter Volume Data to *Nearest*. Change to the *Illumination* tab and define the parameters below: {{}} @@ -122,21 +122,21 @@ Your viewers now show a different color for each skeleton, based on our LUT. 
![View2D and SoExaminerViewer](images/tutorials/visualization/V8_05_Viewer.png "View2D and SoExaminerViewer") ### Render the Vascular System Using SoVascularSystem -The `SoVascularSystem` module is optimized for rendering vascular structures. In comparison to the `SoGVRVolumeRenderer` module, it allows to render the surface, the skeleton or points of the structure in an open inventor scene graph. Interactions with edges of the graph are also already implemented. +The `SoVascularSystem` module is optimized for rendering vascular structures. In contrast to the `SoGVRVolumeRenderer` module, it allows to render the surface, the skeleton, or points of the structure in an Open Inventor scene graph. Interactions with edges of the graph are also already implemented. Add a `SoVascularSystem` module to your workspace. Connect it to your `DtfSkeletonization` module and to the `SoLUTEditor` as seen below. Add another `SoExaminerViewer` for comparing the two visualizations. The same `SoBackground` can be added to your new scene. -Uncheck *Use skeleton colors* and *Use integer LUT* on *Appearance* tab of the `SoVascularSystem` module panel. +Uncheck Use skeleton colors and Use integer LUT on the *Appearance* tab of the `SoVascularSystem` module panel. -![ EditedNetwork](images/tutorials/visualization/V8_SoVascularSystem.png " EditedNetwork") +![Edited network](images/tutorials/visualization/V8_SoVascularSystem.png "Edited network") {{}} More information about the `SoVascularSystem` module can be found in the {{< docuLinks "/Standard/Documentation/Publish/ModuleReference/SoVascularSystem.html" "help page" >}} of the module. {{}} -Draw parameter connections from one `SoExaminerViewer` to the other. Use the fields seen below to synchronize your camera interaction. +Establish parameter connections from one `SoExaminerViewer` to the other. Use the fields seen below to synchronize your camera interaction. 
-![ Camera positions](images/tutorials/visualization/V8_SyncFloat.png " Camera positions") +![Camera positions](images/tutorials/visualization/V8_SyncFloat.png "Camera positions") Connect the two `SoExaminerViewer` modules in the backwards direction by using multiple `SyncFloat` modules and two `SyncVector` modules for the position and orientation fields. @@ -144,21 +144,21 @@ Connect the backwards direction of the two `SoExaminerViewer` by using multiple To establish connections between fields with the type *Float*, you can use the `SyncFloat` module. For fields containing vectors, the appropriate connection can be achieved using the `SyncVector` module. {{}} -![ SyncFloat & SyncVector](images/tutorials/visualization/V8_SyncFloat_Network.png " SyncFloat & SyncVector") +![SyncFloat & SyncVector](images/tutorials/visualization/V8_SyncFloat_Network.png "SyncFloat & SyncVector") Camera interactions are now synchronized between both `SoExaminerViewer` modules. Now, you can notice the difference between the two modules. We use `SoVascularSystem` for a smoother visualization of the vascular structures by using the graph as reference. The `SoGVRVolumeRenderer` renders the volume from the `GraphToVolume` module, including the visible staircase artifacts from the voxel representation in the volume. -![ SoVascularSystem & SoGVRVolumeRenderer](images/tutorials/visualization/V8_Difference1.png " SoVascularSystem & SoGVRVolumeRenderer") +![SoVascularSystem & SoGVRVolumeRenderer](images/tutorials/visualization/V8_Difference1.png "SoVascularSystem & SoGVRVolumeRenderer") Unlike `SoGVRVolumeRenderer`, the `SoVascularSystem` module offers additional visualization modes. Open the panel of the `SoVascularSystem` module and select *Random Points* for Display Mode in the *Main* tab to see the difference. 
-![ Random Points](images/tutorials/visualization/V8_SoVasularSystem_DisplayMode1.png " Random Points") +![Random points](images/tutorials/visualization/V8_SoVasularSystem_DisplayMode1.png "Random points") Change it to *Skeleton* to only show the centerlines/skeletons of the vessels. -![ Skeleton](images/tutorials/visualization/V8_SoVasularSystem_DisplayMode2.png " Skeleton") +![Skeleton](images/tutorials/visualization/V8_SoVasularSystem_DisplayMode2.png "Skeleton") {{}} For volume calculations, use the original image mask instead of the result from `GraphToVolume`. @@ -199,16 +199,16 @@ Be aware that the MinDistance and MaxDistance< Instead of using the ID of each edge for the label property, we are now using the MinDistance property of the skeleton. The result is a color-coded 3D visualization depending on the radius of the vessels. Small vessels are red, large vessels are green. -![Radius based Visualization](images/tutorials/visualization/V8_010new.png "Radius based Visualization") +![Radius-based visualization](images/tutorials/visualization/V8_010new.png "Radius-based visualization") {{}} -If you have a NIfTI file, convert it into an ML image. Load your tree mask NIfTI file using the `itkImageFileReader` module. Connect the output to a `BoundingBox` module, which removes black pixels and creates a volume without unmasked parts. In the end, add a `MLImageFormatSave` module to save it as *.mlimage* file. They are much smaller than a NIfTI file. +If you have a NIfTI file, convert it into an ML image. Load your tree mask NIfTI file using the `itkImageFileReader` module. Connect the output to a `BoundingBox` module, which creates a subvolume with only the unmasked parts. In the end, add a `MLImageFormatSave` module to save it as an *.mlimage* file. These files are much smaller than NIfTI files. 
-![NIFTI file conversion](images/tutorials/visualization/V8_ConvertToMlImage.png "NIFTI file conversion") +![NIfTI file conversion](images/tutorials/visualization/V8_ConvertToMlImage.png "NIfTI file conversion") {{}} ### Mouse Clicks on Vessel Graph -Open the *Interaction* tab of the `SoVascularSystem` module. In `SoExaminerViewer` module, change to *Pick Mode* and click into your vessel structure. The panel of the `SoVascularSystem` module shows all information about the hit of your click in the vessel tree. +Open the *Interaction* tab of the `SoVascularSystem` module. In `SoExaminerViewer` module, change to *Pick Mode* and click {{< mousebutton "left" >}} into your vessel structure. The panel of the `SoVascularSystem` module shows all information about the hit of your click in the vessel tree. ![Getting the click point in a vascular tree](images/tutorials/visualization/V8_Interactions.png "Getting the click point in a vascular tree") @@ -216,6 +216,6 @@ Open the *Interaction* tab of the `SoVascularSystem` module. In `SoExaminerViewe * Vessel centerlines can be created using a `DtfSkeletonization` module. * Vascular structures can be visualized using a `SoVascularSystem` module, which provides several vessel-specific display modes. * The `SoVascularSystem` module provides information about mouse clicks into a vascular tree. -* The labels of a skeleton can be used to store additional information for visualization. +* The labels of a Skeleton can be used to store additional information for visualization. 
{{< networkfile "examples/visualization/example8/VisualizationExample8.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample9.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample9.md index 01795f516..454600eff 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample9.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample9.md @@ -18,7 +18,7 @@ menu: {{< youtube "Sxfwwm6BGnA" >}} ## Introduction -In this tutorial, we are using the `AnimationRecorder` module to generate dynamic and visually appealing animations of our 3D scenes. We will be recording a video of the results of our previous project, particularly the detailed visualizations of the muscles, bones, and blood vessels created using `PathTracer`. +In this tutorial, we are using the `AnimationRecorder` module to generate dynamic and visually appealing animations of our 3D scenes. We will record a video of the results of our previous project, particularly the detailed visualizations of the muscles, bones, and blood vessels created using `PathTracer`. ## Steps to Do Open the network and files of [Example 6.2](tutorials/visualization/pathtracer/pathtracerexample2/), add a `SoSeparator` module and an `AnimationRecorder` module to your workspace and connect them as shown below. @@ -27,13 +27,13 @@ The `SoSeparator` module collects all components of our scene and provides one o The `AnimationRecorder` module allows to create animations and record them as video streams. It provides an editor to create keyframes for animating field values. 
-![AnimationRecorder](images/tutorials/visualization//pathtracer/Example9_1.png " AnimationRecorder") +![Complex network for a visualization recorded with the AnimationRecorder](images/tutorials/visualization//pathtracer/Example9_1.png "Complex network for a visualization recorded with the AnimationRecorder") -Define the following LUTs in `SoLUTEditor` of the knee or load this [XML file](examples/visualization/example6/LUT_AnimationRecorder.xml) with `LUTLoad1` to use a predefined LUT. +Define the following LUTs in `SoLUTEditor` of a knee or load this [XML file](examples/visualization/example6/LUT_AnimationRecorder.xml) with `LUTLoad1` to use a predefined LUT. -![ SoLUTEditor](images/tutorials/visualization//pathtracer/V9_LUT.png " SoLUTEditor") +![SoLUTEditor is used to set colors for a realistic visualization of a knee](images/tutorials/visualization//pathtracer/V9_LUT.png "SoLUTEditor is used to set colors for a realistic visualization of a knee") -Open the `AnimationRecorder` module and click on *New* to initiate a new animation, selecting a filename for the recorded keyframes (*.mlmov*). +Open the `AnimationRecorder` module and click on New to initiate a new animation, selecting a filename for the recorded keyframes (*.mlmov*). At the bottom of the `AnimationRecorder` panel, you'll find the keyframe editor, which is initially enabled. It contains the camera track with a keyframe at position *0*. The keyframe editor at the bottom serves as a control hub for playback and recording. @@ -41,58 +41,58 @@ At the bottom of the `AnimationRecorder` panel, you'll find the keyframe editor, Close the `SoExaminerViewer` while using the `AnimationRecorder` to prevent duplicate renderings and to save resources. 
{{}}
-![AnimationRecorder](images/tutorials/visualization//pathtracer/V9_AnimationRecorder.png " AnimationRecorder")
+![AnimationRecorder without any keyframes](images/tutorials/visualization//pathtracer/V9_AnimationRecorder.png "AnimationRecorder without any keyframes")
-Keyframes in the `AnimationRecorder` mark specific field values at defined timepoints. You can add keyframes on the timeline by double-clicking at the chosen timepoint or right-clicking and selecting *Insert Key Frame*. Between these keyframes, values of the field are interpolated (linear or spline) or not. Selecting a keyframe, a dialog *Edit Camera Key Frame* will open.
+Keyframes in the `AnimationRecorder` mark specific field values at defined timepoints. You can add keyframes on the timeline by double-clicking {{< mousebutton "left" >}} at the chosen timepoint or right-clicking {{< mousebutton "right" >}} and selecting *Insert Key Frame*. Between these keyframes, values of the field are either interpolated (linearly or with splines) or held constant. When you select a keyframe, the *Edit Camera Key Frame* dialog opens.
-When adding a keyframe at a specific timepoint, you can change the camera dynamically in the viewer. This involves actions such as rotating to left or right, zooming in and out, and changing the camera's location. Within the *Edit Camera Key Frame* dialog save each keyframe by clicking on the *Store Current Camera State* button. Preview the video to observe the camera's movement.
+When adding a keyframe at a specific timepoint, you can change the camera dynamically in the viewer. This involves actions such as rotating left or right, zooming in and out, and changing the camera's location. Within the *Edit Camera Key Frame* dialog, save each keyframe by clicking on the Store Current Camera State button. Preview the video to observe the camera's movement.
-The video settings in the `AnimationRecorder` provide essential parameters for configuring the resulting animation.
You can control the *Framerate*, determining the number of frames per second in the video stream. It's important to note that altering the framerate may lead to the removal of keyframes, impacting the animation's smoothness.
+The video settings in the `AnimationRecorder` provide essential parameters for configuring the resulting animation. You can control the Framerate, determining the number of frames per second in the video stream. It's important to note that altering the framerate may lead to the removal of keyframes, impacting the animation's smoothness.
-Additionally, the *Duration* of the animation, specified as *videoLength*, defines how long the animation lasts in seconds. The *Video Size* determines the resolution of the resulting video.
+Additionally, the Duration of the animation, specified as *videoLength*, defines how long the animation lasts in seconds. The Video Size determines the resolution of the resulting video.
Repeat this process for each timepoint where adjustments to the camera position are needed, thus creating a sequence of keyframes. Before proceeding further, use the playback options situated at the base of the keyframe editor. This allows for a quick preview of the initial camera sequence, ensuring the adjustments align seamlessly for a polished transition between keyframes.
{{}}
-Decrease the number of iterations in the SoPathTracer module for a quicker preview if you like. Make sure to increase again before recording the final video.
+Decrease the number of iterations in the `SoPathTracer` module for a quicker preview if you like. Make sure to increase it again before recording the final video.
{{}}
-![ AnimationRecorder](images/tutorials/visualization//pathtracer/V9_AnimationRecorder1.png " AnimationRecorder")
+![AnimationRecorder with keyframes](images/tutorials/visualization//pathtracer/V9_AnimationRecorder1.png "AnimationRecorder with keyframes")
## Modulating Knee Visibility with LUTRescale in Animation
We want to show and hide the single segmentations during camera movements. Add two `LUTRescale` modules to your workspace and connect them as illustrated below. The rationale behind using `LUTRescale` is to control the transparency and thereby the visibility of elements in the scene at different timepoints.
-![ LUTRescale](images/tutorials/visualization//pathtracer/V9_3.png " LUTRescale")
+![Network with LUTRescale modules](images/tutorials/visualization//pathtracer/V9_3.png "Network with LUTRescale modules")
## Animate Bones and Vessels
-Now, let's shift our focus to highlighting bones and vessels within the animation. Right-click on the `LUTRescale` module, navigate to *Show Window*, and select *Automatic Panel*. This will bring up the control window for the `LUTRescale` module. Search for the field named targetMax. You can either drag and drop it directly from the *Automatic Panel*, or alternatively, locate the Max field in the *Output Index Range* box within the module panel and then drag and drop it onto the fields section in the `AnimationRecorder` module, specifically under the *Perspective Camera* field. +Now, let's shift our focus to highlighting bones and vessels within the animation. Right-click {{< mousebutton "right" >}} on the `LUTRescale` module, navigate to *Show Window* and select *Automatic Panel*. This will bring up the control window for the `LUTRescale` module. Search for the field named targetMax.
You can either drag and drop it directly from the *Automatic Panel*, or locate the Max field in the *Output Index Range* box within the module panel and then drag and drop it onto the fields section in the `AnimationRecorder` module, specifically under the *Perspective Camera* field.
By linking the targetMax field of the `LUTRescale` module to the `AnimationRecorder`, you establish a connection that allows you to define different values of the field for specific timepoints. The values between these timepoints can be interpolated as described above.
-![ LUTRescale & AnimationRecorder](images/tutorials/visualization//pathtracer/LUTRescale_AnimationRecorder2.png " LUTRescale & AnimationRecorder")
+![AnimationRecorder with a timeline for LUTRescale.targetMax](images/tutorials/visualization//pathtracer/LUTRescale_AnimationRecorder2.png "AnimationRecorder with a timeline for LUTRescale.targetMax")
-To initiate the animation sequence, start by adding a keyframe at position *0* for the targetMax field. Set the Target Max value in the *Edit Key Frame – [LUTRescale.targetMax]* window to *1*, and click on the *Store Current Field Value* button to save it.
+To initiate the animation sequence, start by adding a keyframe at position *0* for the targetMax field. Set the Target Max value in the *Edit Key Frame – [LUTRescale.targetMax]* window to *1*, and click on the Store Current Field Value button to save it.
Next, proceed to add keyframes at the same timepoints as the desired keyframes of the *Perspective Camera's* first sequence. For each selected keyframe, progressively set values for the Target Max field, gradually increasing to *10*. This ensures precise synchronization between the visibility adjustments controlled by the `LUTRescale` module and the camera movements in the animation, creating a seamless transition. This gradual shift visually reveals the bones and vessels while concealing the knee structures and muscles.
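The step of reusing the camera track's timepoints for the targetMax track while ramping its value from 1 to 10 can be sketched as follows. This is a hypothetical helper for illustration, not part of the AnimationRecorder API:

```python
# Hypothetical sketch (not the AnimationRecorder API): place one
# targetMax keyframe at each camera-keyframe timepoint, with values
# ramped evenly from start to stop, as the tutorial describes.

def ramp_over(times, start, stop):
    """Return (time, value) keyframes evenly ramped across the timepoints."""
    n = len(times)
    if n == 1:
        return [(times[0], float(stop))]
    step = (stop - start) / (n - 1)
    return [(t, start + i * step) for i, t in enumerate(times)]

camera_times = [0.0, 2.0, 4.0, 6.0]        # assumed camera keyframe times
print(ramp_over(camera_times, 1.0, 10.0))
# [(0.0, 1.0), (2.0, 4.0), (4.0, 7.0), (6.0, 10.0)]
```

Keeping both tracks on identical timepoints is what makes the visibility change and the camera movement feel like one continuous transition.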
-To seamlessly incorporate the new keyframe at the same timepoints as the *Perspective Camera*, you have two efficient options. Simply click on the keyframe of the first sequence, and the line will automatically appear in the middle of the keyframe. A double-click will effortlessly insert a keyframe at precisely the same position. If you prefer more accurate adjustments, you can also set your frame manually using the *Edit Key Frame - LUTRescale.targetMax* window. This flexibility allows for precise control over the animation timeline, ensuring keyframes align precisely with your intended moments. +To seamlessly incorporate the new keyframe at the same timepoints as the *Perspective Camera*, you have two efficient options. Simply click on the keyframe of the first sequence, and the line will automatically appear in the middle of the keyframe. A double-click {{< mousebutton "left" >}} will effortlessly insert a keyframe at precisely the same position. If you prefer more accurate adjustments, you can also set your frame manually using the *Edit Key Frame - LUTRescale.targetMax* window. This flexibility allows for precise control over the animation timeline, ensuring keyframes align precisely with your intended moments. -![ LUTRescale & AnimationRecorder](images/tutorials/visualization//pathtracer/V9_7.png " LUTRescale & AnimationRecorder") +![AnimationRecorder with keyframes for LUTRescale.targetMax](images/tutorials/visualization//pathtracer/V9_7.png "AnimationRecorder with keyframes for LUTRescale.targetMax") ## Showcasing Only Bones -To control the visibility of the vessels, right-click on the ` LUTRescale1` module connected to the vessels. Open the *Show Window* and select *Automatic Panel*. Drag and drop the targetMax field into the `AnimationRecorder` module's fields section. +To control the visibility of the vessels, right-click {{< mousebutton "right" >}} on the `LUTRescale1` module connected to the vessels. 
Open the *Show Window* and select *Automatic Panel*. Drag and drop the targetMax field into the `AnimationRecorder` module's fields section.
-![ LUTRescale1 & AnimationRecorder](images/tutorials/visualization//pathtracer/V9_8.png " LUTRescale1 & AnimationRecorder")
+![AnimationRecorder with timelines for both targetMax values](images/tutorials/visualization//pathtracer/V9_8.png "AnimationRecorder with timelines for both targetMax values")
-Add keyframes for both the *Perspective Camera* and the targetMax in `LUTRescale1` at the same timepoints. Access the *Edit Camera Key Frame* window for the added keyframe in the Perspective Camera and save the *current camera state*. To exclusively highlight only bones, adjust the Target Max values from *1* to *10000* in *Edit Key Frame - LUTRescale1.targetMax*.
+Add keyframes for both the *Perspective Camera* and the targetMax in `LUTRescale1` at the same timepoints. Access the *Edit Camera Key Frame* window for the added keyframe in the *Perspective Camera* and save the *current camera state*. To highlight only the bones, adjust the Target Max values from *1* to *10000* in *Edit Key Frame - LUTRescale1.targetMax*.
-![ LUTRescale1 & AnimationRecorder](images/tutorials/visualization//pathtracer/V9_9.png " LUTRescale1 & AnimationRecorder")
+![AnimationRecorder with keyframes for both targetMax values](images/tutorials/visualization//pathtracer/V9_9.png "AnimationRecorder with keyframes for both targetMax values")
To feature everything again at the end, copy the initial keyframe of each field and paste it at the end of the timeline. This ensures a comprehensive display of all elements in the closing frames of your animation.
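Copying each track's initial keyframe to the end of the timeline can be sketched like this. The data layout and track names are assumptions for illustration, not the AnimationRecorder API:

```python
# Sketch with an assumed data layout (not the AnimationRecorder API):
# append a closing keyframe per track that restores its initial value,
# so the final frames show the full scene again.

def loop_tracks(tracks, video_length_s):
    """Copy each track's first keyframe value to the end of the timeline."""
    return {
        name: keys + [(video_length_s, keys[0][1])]
        for name, keys in tracks.items()
    }

tracks = {  # hypothetical keyframe tracks as (time_s, value) pairs
    "LUTRescale.targetMax":  [(0.0, 1.0), (6.0, 10.0)],
    "LUTRescale1.targetMax": [(0.0, 1.0), (8.0, 10000.0)],
}
looped = loop_tracks(tracks, 12.0)
print(looped["LUTRescale.targetMax"])  # [(0.0, 1.0), (6.0, 10.0), (12.0, 1.0)]
```

The same copy applies to the camera track, so the animation ends where it started.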
-![ Final Animation Sequence Key Frames](images/tutorials/visualization//pathtracer/V9_10.png " Final Animation Sequence Key Frames") +![Keyframes sequence of the final animation](images/tutorials/visualization//pathtracer/V9_10.png "Keyframes sequence of the final animation") Finally, use the playback and recording buttons at the bottom of the keyframe editor to preview and record your animation. diff --git a/mevislab.github.io/themes/MeVisLab/assets/sass/styles.scss b/mevislab.github.io/themes/MeVisLab/assets/sass/styles.scss index b870094a8..6cac85672 100644 --- a/mevislab.github.io/themes/MeVisLab/assets/sass/styles.scss +++ b/mevislab.github.io/themes/MeVisLab/assets/sass/styles.scss @@ -101,6 +101,13 @@ attribute { word-wrap: break-word; } +inlineCode { + font-family: monospace; + background-color: #eeeeee; + padding: 2px 4px; + border-radius: 3px; +} + kbd { display: inline-block; border: 1px solid #ccc; diff --git a/mevislab.github.io/themes/MeVisLab/layouts/partials/glossarycontents.html b/mevislab.github.io/themes/MeVisLab/layouts/partials/glossarycontents.html index 1c0cddd66..594d37f22 100644 --- a/mevislab.github.io/themes/MeVisLab/layouts/partials/glossarycontents.html +++ b/mevislab.github.io/themes/MeVisLab/layouts/partials/glossarycontents.html @@ -8,8 +8,8 @@

Abbreviations

- CSO - Contour Segmented Object + CSO + Contour Segmentation Object WEM