Cornell CIS Program of Computer Graphics

computer graphics bachelor thesis


Theses and Dissertations

  • Jeffrey Blaine Budsberg. Pigmented colorants: Dependence on media and time. Master's thesis, Cornell University, Jan 2007.
  • Jeffrey Michael Wang. Animating the ivory-billed woodpecker. Master's thesis, Cornell University, Jan 2007.
  • Nasheet Zaman. A sketch-based interface for parametric character modeling. Master's thesis, Cornell University, Jan 2007.
  • Jeremiah Fairbank. View dependent perspective images. Master's thesis, Cornell University, August 2005.
  • Vikash R Goel. Analytical centerline extraction and surface fitting using CT scans for aortic aneurysm repair. Master's thesis, Cornell University, May 2005.
  • Adam Michael Kravetz. Polyhedral hull online compositing system: Texturing and reflections. Master's thesis, Cornell University, August 2005.
  • Hongsong Li. Theoretical Framework and Physical Measurements of Surface and Subsurface Light Scattering from Material Surfaces. PhD thesis, Cornell University, May 2005.
  • Michael Donikian. Iterative adaptive sampling for accurate direct illumination. Master's thesis, Cornell University, August 2004.
  • Sebastian Pablo Fernandez. Interactive Direct Illumination in Complex Environments. PhD thesis, Cornell University, June 2004.
  • Henry H. Letteron. Polyhedral hull online compositing system: Reconstruction and shadowing. Master's thesis, Cornell University, August 2004.
  • John Crane Mollis. Real-time hardware based tone reproduction. Master's thesis, Cornell University, January 2004.
  • William Adams Stokes. Perceptual illumination components: A new approach to efficient, high-quality global illumination rendering. Master's thesis, Cornell University, August 2004.
  • Ryan McCloud Ismert. A physical sampling metric for image-based computer graphics. Master's thesis, Cornell University, January 2003.
  • Jeremy Adam Selan. Merging live video with synthetic imagery. Master's thesis, Cornell University, 2003.
  • Parag Prabhakar Tole. Two Algorithms for Progressive Computation of Accurate Global Illumination. PhD thesis, Cornell University, 2003.
  • Steven Berman. Hardware-accelerated sort-last parallel rendering for PC clusters. Master's thesis, Cornell University, 2002.
  • Randima Fernando. Adaptive techniques for hardware shadow generation. Master's thesis, Cornell University, 2002.
  • SuAnne Fu. The impossible vase: An exploration in perception. Master's thesis, Cornell University, 2002.
  • Richard Levy. A scalable visualization display wall presentation system for cluster-based computing. Master's thesis, Cornell University, 2002.
  • Fabio Pellacini. A Perceptually-Based Decision Theoretic Framework for Interactive Rendering. PhD thesis, Cornell University, 2002.
  • David Augustus Hart. Direct illumination with lazy visibility evaluation. Master's thesis, Cornell University, 2000.
  • Daniel Kartch. Efficient Rendering and Compression for Full-Parallax Computer-Generated Holographic Stereograms. PhD thesis, Cornell University, 2000.
  • Mahesh Ramasubramanian. A perceptually based physical error metric for realistic image synthesis. Master's thesis, Cornell University, 2000.
  • Corey Theresa Toler. A computer-based approach for teaching architectural drawing. Master's thesis, Cornell University, 2000.
  • Yang Li Hector Yee. Spatiotemporal sensitivity and visual attention for efficient rendering of dynamic environments. Master's thesis, Cornell University, 2000.
  • Daniel G. Gelb. Image-based rendering for non-diffuse scenes. Master's thesis, Cornell University, 1999.
  • Gordon Kindlmann. Semi-automatic generation of transfer functions for direct volume rendering. Master's thesis, Cornell University, 1999.
  • Andrew Kunz. Face vectors: An abstraction for data-driven 3-d facial animation. Master's thesis, Cornell University, 1999.
  • Eric Chih-Cheng Wong. Artistic rendering of portrait photographs. Master's thesis, Cornell University, 1999.
  • Gun Alppay. Fast display of directional global illumination solutions. Master's thesis, Cornell University, 1998.
  • Richard M. Coutts. Conceptual modeling and rendering techniques for architectural design. Master's thesis, Cornell University, 1998.
  • James A. Ferwerda. Visual Models for Realistic Image Synthesis. PhD thesis, Cornell University, 1998.
  • Michael J. Malone. Sketchpad+ conceptual geometric modeling through perspective sketching on a pen-based display. Master's thesis, Cornell University, 1998.
  • Stephen R. Marschner. Inverse Rendering for Computer Graphics. PhD thesis, Cornell University, 1998.
  • Liang Peng. The Color Histogram and its Applications in Digital Photography. PhD thesis, Cornell University, 1998.
  • Moreno A. Piccolotto. Sketchpad+ architectural modeling through perspective sketching on a pen-based display. Master's thesis, Cornell University, 1998.
  • Bruce J. Walter. Density Estimation Techniques for Global Illumination. PhD thesis, Cornell University, 1998.
  • Sing-Choong Foo. A gonioreflectometer for measuring the bidirectional reflectance of material for use in illumination computation. Master's thesis, Cornell University, 1996.
  • Gene Greger. The irradiance volume. Master's thesis, Cornell University, 1996.
  • Patrick Heynen. Issues in perceptual organization for realistic image synthesis. Master's thesis, Cornell University, 1996.
  • Jonathan Joseph. Direct volume rendering of irregularly sampled data using voronoi decomposition. Master's thesis, Cornell University, 1996.
  • Greg Reeves Spencer. Perceptual scaling functions for high dynamic range images. Master's thesis, Cornell University, 1996.
  • Bretton Wade. Kernel based density estimation for global illumination. Master's thesis, Cornell University, 1996.
  • David M. Zareski. Parallel decomposition of view-independent global illumination algorithms. Master's thesis, Cornell University, 1996.
  • Daniel Lischinski. Accurate and Reliable Algorithms for Global Illumination. PhD thesis, Cornell University, 1994.
  • Christopher R. Schoeneman. A software framework for user interface design. Master's thesis, Cornell University, 1994.
  • Erin Shaw. Hierarchical radiosity for dynamic environments. Master's thesis, Cornell University, 1994.
  • Brian Edward Smits. Efficient Hierarchical Radiosity in Complex Environments. PhD thesis, Cornell University, 1994.
  • Julie O'Brien Dorsey. Computer Graphics Techniques for Opera Lighting Design and Simulation. PhD thesis, Cornell University, 1993.
  • Xiao Dong He. Physically-Based Models for the Reflection, Transmission and Subsurface Scattering of Light by Smooth and Rough Surfaces, with Applications to Realistic Image Synthesis. PhD thesis, Cornell University, 1993.
  • Michael C. Monks. Facilitating design with parametric construction methods. Master's thesis, Cornell University, 1993.
  • Kevin L. Novins. Towards Accurate and Efficient Volume Rendering. PhD thesis, Cornell University, 1993.
  • Richard S. Pasetto. A biomechanical model of human skin using finite element analysis. Master's thesis, Cornell University, 1993.
  • Filippo Tampieri. Discontinuity Meshing for Radiosity Image Synthesis. PhD thesis, Cornell University, 1993.
  • David Baraff. Dynamic Simulation of Non-Penetrating Rigid Bodies. PhD thesis, Cornell University, 1992.
  • Kathy Kershaw Barshatzky. A generalized texture-mapping pipeline. Master's thesis, Cornell University, 1992.
  • Ricardo Pomeranz. Mathematical means of representing curves and surfaces of varying spatial frequencies. Master's thesis, Cornell University, 1992.
  • Peter W. Pruyn. An exploration of three dimensional computer graphics in cockpit avionics. Master's thesis, Cornell University, 1992.
  • Mark C. Reichert. A two-pass radiosity method driven by lights and viewers position. Master's thesis, Cornell University, 1992.
  • Stephen H. Westin. Predicting reflectance functions from complex surfaces. Master's thesis, Cornell University, 1992.
  • Harold R. Zatz. Galerkin radiosity: A higher order solution method for global illumination. Master's thesis, Cornell University, 1992.
  • Priamos N. Georgiades. Interactive methods for locally manipulating the intrinsic geometry of curved surfaces. Master's thesis, Cornell University, 1991.
  • Theodore H. Himlan. Spectroradiometric 2d imaging and physical property measurements for validating and improving global illumination simulations. Master's thesis, Cornell University, 1991.
  • Leonard R. Wanger. Perceiving spatial relationships in computer generated images. Master's thesis, Cornell University, 1991.
  • Paul H. Wanuga. Accelerated radiosity methods for rendering complex environments. Master's thesis, Cornell University, 1991.
  • Julie O'Brien Dorsey. Computer graphics for the design and visualization of opera lighting effect. Master's thesis, Cornell University, 1990.
  • David W. George. A radiosity redistribution algorithm for dynamic environments. Master's thesis, Cornell University, 1990.
  • Rodney J. Recker. Improved techniques for progressive refinement radiosity. Master's thesis, Cornell University, 1990.
  • Shenchang Eric Chen. A progressive radiosity method and its implementation in a distributed processing environment. Master's thesis, Cornell University, 1989.
  • Richard L. Eaton. Explicit geometric constraints. Master's thesis, Cornell University, 1989.
  • Stuart Feldman. An abstraction paradigm for modeling complex environments. Master's thesis, Cornell University, 1989.
  • Peter Kochevar. Computer Graphics on Massively Parallel Machines. PhD thesis, Cornell University, 1989.
  • Wayne Lytle. A modular testbed for realistic image synthesis. Master's thesis, Cornell University, 1989.
  • Adam C. Stettner. Computer graphics for acoustic simulation and visualization. Master's thesis, Cornell University, 1989.
  • Filippo Tampieri. Global illumination algorithms for parallel computer architectures. Master's thesis, Cornell University, 1989.
  • Paul M. Isaacs. Controlling computer generated motion with dynamics, kinematics, and behavior functions. Master's thesis, Cornell University, 1988.
  • Wei Lu. Curved object modeling and rendering. Master's thesis, Cornell University, 1988.
  • Holly Rushmeier. Realistic Image Synthesis for Scenes with Radiatively Participating Media. PhD thesis, Cornell University, 1988.
  • John R. Wallace. A two-pass solution to the rendering equation: A synthesis of ray tracing and radiosity methods. Master's thesis, Cornell University, 1988.
  • Daniel R. Baum. An efficient radiosity method for dynamic environments. Master's thesis, Cornell University, 1987.
  • James A. Ferwerda. A psychophysical approach to the aliasing problem in realistic image synthesis. Master's thesis, Cornell University, 1987.
  • Malcolm Panthaki. Color postprocessing for three-dimensional finite element mesh quality evaluation and evolving graphical workstations. Master's thesis, Cornell University, 1987.
  • David C. Salmon. Large Change-of-Curvature Effects in Quadratic Finite Elements for CAD of Membrane Structures. PhD thesis, Cornell University, 1987.
  • Philip J. Brock. A unified interactive geometric modeling system for simulating highly complex environments. Master's thesis, Cornell University, 1986.
  • Lisa Maynes Desjarlais. A wave based reflection model for realistic image synthesis. Master's thesis, Cornell University, 1986.
  • Eric A. Haines. The light buffer: A ray tracer shadow testing accelerator. Master's thesis, Cornell University, 1986.
  • David S. Immel. A radiosity method for non-diffuse surfaces. Master's thesis, Cornell University, 1986.
  • Kevin J. Koestner. A wave based reflection model for realistic image synthesis. Master's thesis, Cornell University, 1986.
  • Gary W. Meyer. Color Calculations for and Perceptual Assessment of Computer Graphic Images. PhD thesis, Cornell University, 1986.
  • Alan J. Polinsky. A unified interactive geometric modeling system for simulating highly complex environments. Master's thesis, Cornell University, 1986.
  • Holly E. Rushmeier. Extending the radiosity method to transmitting and specularly reflecting surfaces. Master's thesis, Cornell University, 1986.
  • Rebecca Slivka. A motion control system for realistic dynamics. Master's thesis, Cornell University, 1986.
  • Dan V. Ambrosi. Quadric surface modeling for ray tracing. Master's thesis, Cornell University, 1985.
  • Bruce C. Bailey. Unification of color postprocessing techniques for three-dimensional computational mechanics. Master's thesis, Cornell University, 1985.
  • Michael F. Cohen. A radiosity method for the realistic image synthesis of complex diffuse environments. Master's thesis, Cornell University, 1985.
  • Cindy M. Goral. A model for the interaction of light between diffuse surfaces. Master's thesis, Cornell University, 1985.
  • Jerome F. Hajjar. General-purpose three-dimensional color postprocessing for engineering analysis. Master's thesis, Cornell University, 1985.
  • Thomas V. Mazzotta. Modeling with scripts: A procedural approach to the construction of geometric models using interactive computer graphic techniques. Master's thesis, Cornell University, 1985.
  • Donald Woodrow White. Material and geometric nonlinear analysis of local planar behavior in steel frames using interactive computer graphics. Master's thesis, Cornell University, 1985.
  • Richard J. Carey. Textures for realistic image synthesis. Master's thesis, Cornell University, 1984.
  • Tao-Yang Han. Adaptive Substructuring and Interactive Graphics for Three-Dimensional Elasto-Plastic Finite Element Analysis. PhD thesis, Cornell University, 1984.
  • Gary J. Hooper. A system for image synthesis. Master's thesis, Cornell University, 1984.
  • Renato Perucchio. An Integrated Boundary Element Analysis System with Interactive Computer Graphics for Three-Dimensional Linear-Elastic Fracture Mechanics. PhD thesis, Cornell University, 1984.
  • David C. Salmon. Improved computer-aided design of cable-reinforced membranes. Master's thesis, Cornell University, 1984.
  • Channing P. Verbeck. A comprehensive light source description for computer graphics. Master's thesis, Cornell University, 1984.
  • Hank Weghorst. An image synthesis system with emphasis on ray tracing techniques. Master's thesis, Cornell University, 1984.
  • Roy A. Hall. A methodology for realistic image synthesis. Master's thesis, Cornell University, 1983.
  • Harold Hedelman. A data flow approach to composition with procedural models. Master's thesis, Cornell University, 1983.
  • John D. Hollyday. Refined modeling and interactive display of finite element stresses for cable-reinforced membranes. Master's thesis, Cornell University, 1983.
  • Gary W. Meyer. Colorimetry and computer graphics. Master's thesis, Cornell University, 1983.
  • Marcelo Gattas. Large Displacement, Interactive-Adaptive Dynamic Analysis of Frames. PhD thesis, Cornell University, 1982.
  • Jon H. Pittman. An interactive graphics environment for architectural energy simulation. Master's thesis, Cornell University, 1982.
  • Kim L. Shelley. Path specification and the use of path coherence in the rendering of dynamic sequences. Master's thesis, Cornell University, 1982.
  • Bruce A. Wallace. Automated production techniques in cartoon animation. Master's thesis, Cornell University, 1982.
  • San-Cheng Chang. An Integrated Finite Element Nonlinear Shell Analysis System with Interactive Computer Graphics. PhD thesis, Cornell University, 1981.
  • Robert L. Cook. A reflection model for realistic image synthesis. Master's thesis, Cornell University, 1981.
  • Eliot A. Feibush. An interactive computer graphics geometric input and editing system for architectural design. Master's thesis, Cornell University, 1981.
  • Bruce K. Forbes. Methods for reducing computational requirements in the geometric modeling of planar surfaces and volumes. Master's thesis, Cornell University, 1981.
  • Tao-Yang Han. A general two-dimensional, interactive graphical finite/boundary element preprocessor for a virtual storage environment. Master's thesis, Cornell University, 1981.
  • Lynn E. Johnson. An Interactive Method for Development and Evaluation of Reservoir Operating Policies. PhD thesis, Cornell University, 1981.
  • Michael Schulman. The interactive display of parameters on two- and three-dimensional surfaces. Master's thesis, Cornell University, 1981.
  • Stuart Sechrest. A visible polygon reconstruction algorithm. Master's thesis, Cornell University, 1981.
  • Peter N. French. Water Quality Modeling Using Interactive Computer Graphics. PhD thesis, Cornell University, 1980.
  • John L. Gross. Design for the Presentation of Progressive Collapse Using Interactive Computer Graphics. PhD thesis, Cornell University, 1980.
  • Robert B. Haber. Computer-Aided Design of Cable Reinforced Membrane Structures. PhD thesis, Cornell University, 1980.
  • Michael Kaplan. Parallel processing techniques for hidden-surface removal. Master's thesis, Cornell University, 1980.
  • Wayne E. Robertz. A graphical input system for computer-aided architectural design. Master's thesis, Cornell University, 1980.
  • Harvey Allison. A three-dimensional graphic input method for architectural design. Master's thesis, Cornell University, 1979.
  • Brian A. Barsky. A method for describing curved surfaces by transforming between interpolatory and b-spline representations. Master's thesis, Cornell University, 1979.
  • George H. Joblove. Color space and computer graphics. Master's thesis, Cornell University, 1979.
  • Douglas S. Kay. Transparency, refraction and ray tracing for computer synthesized images. Master's thesis, Cornell University, 1979.
  • Thomas A. Mutryn. Nonlinear, inelastic building connections. Master's thesis, Cornell University, 1979.
  • Richard Rogers. A computer-aided method for shading device design and analysis. Master's thesis, Cornell University, 1979.
  • Marc E. Schiler. Computer simulation of foliage effects on building energy load calculations. Master's thesis, Cornell University, 1979.
  • Mark S. Shephard. Finite Element Grid Optimization with Interactive Computer Graphics . PhD thesis, Cornell University, 1979.
  • Marc S. Levoy. Computer-assisted cartoon animation. Master's thesis, Cornell University, 1978.
  • Kevin Weiler. Hidden surface removal using polygon area sorting. Master's thesis, Cornell University, 1978.
  • Peter Atherton. Polygon shadow generation with an application to solar rights. Master's thesis, Cornell University, 1977.
  • Robert W. Thornton. Interactive modeling in three dimensions through two-dimensional windows. Master's thesis, Cornell University, 1977.
  • Nicholas H. Weingarten. Computer graphics input methods for interactive design. Master's thesis, Cornell University, 1977.

Bachelor and Master Theses

We continually offer proposals for bachelor's and master's thesis projects in all areas of our research activities (see our publication page) and in related subjects covering most topics in computer graphics. Thesis topics are usually specified in cooperation with one of our research assistants and/or Prof. Kobbelt, taking into account the student's individual interests and previous knowledge as well as the current research agenda of the Computer Graphics group (e.g., ongoing academic or industrial cooperations). To ensure successful completion of the thesis, we usually expect students to have

  • taken the "Basic Techniques in Computer Graphics" lecture (bachelor students)
  • extensive knowledge of computer graphics (master students)
  • a good working knowledge of C++

or an equivalent qualification.

After a one-month evaluation period, you submit a short research proposal that summarizes the general subject and the detailed goals of the thesis. Based on this proposal, the thesis is registered officially. During the following six months (four for a bachelor's thesis), you work on the programming tasks, literature search, data acquisition, and so on, as required by your project. If necessary, you can use the special equipment available at the graphics lab, including a 3D scanner, a stereo projection wall, a robot arm, a 3D printer, high-quality video and still cameras, and other devices. Throughout your thesis project, a research assistant will be available to support you and supervise the progress of the project, and can be asked for help if difficulties arise. Finally, the thesis is completed by writing a report, giving a concluding talk about the project and its results, and providing an archive with full documentation of the programs and other resources created during the project. Please contact us via [email protected] for more information.

Visual Computing


Thesis Topics

Topics for master and bachelor theses.

If you are interested in conducting a thesis project in visual computing, please contact any of the group members to discuss further details. We recommend first taking an advanced course (i.e., beyond the first/second-year introductory courses) or a seminar with us as preparation, but this is not a strict requirement if we can find a common topic based on your prior knowledge.

Below, we list a few example topics. The list is not exhaustive but is meant to give a rough impression of suitable topics. Of course, it would be great if you were interested in one of the specific project ideas listed.

Computer Graphics Topics

  • 3D self-localization and mapping with a dynamic 3D scanner. You are given a dynamic 3D point cloud scanner, such as a Microsoft "Kinect". The device provides a stream of 3D points that are sampled from the environment the scanner currently sees, and the scanner is moved by a human user through a scene (without the system knowing the motion path). The task is now to assemble this stream of 3D points into a consistent scene (mapping), putting each individual scan in the right place (self-localization). Challenges arise due to symmetric geometry (repeating objects) and noise/occlusion and other measurement artifacts. The topic is suitable for bachelor (basic pipeline) and master (advanced processing) projects.
  • Example-based generative data models (images + 3D scenes). This is a timely and very interesting topic. Can we learn how classes of 3D objects or 2D images are structured from examples? This means we just show our algorithm a few examples of what we want (for example, 3D models of cars or castles, or paintings of people) and we want the computer to generalize and create similar content automatically. A variety of techniques exist - from non-parametric texture synthesis (which can be implemented in 10 lines of C++ code and can generalize from a single image) to adversarially trained deep networks (which need only a few more lines of code in Python+TensorFlow, but also a million example images). Generally, the 3D case is (seems to be?) more challenging than the 2D case in terms of implementation effort. The topic is also suitable for bachelor and master thesis projects.
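
The self-localization step in the first topic ultimately reduces to estimating, for each incoming scan, the rigid transform that best aligns it with the map built so far. As a minimal sketch (not project code, and assuming point correspondences are already known), the least-squares alignment can be written in a few lines of Python with NumPy using the SVD-based Kabsch method:

```python
import numpy as np

def align_scans(P, Q):
    """Rigid alignment (Kabsch): find R, t such that R @ P + t ~= Q.

    P, Q: (3, N) arrays of corresponding 3D points.
    """
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    # Cross-covariance of the centered point sets.
    H = (P - p_mean) @ (Q - q_mean).T
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections (det = -1 solutions).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

Iterating this estimate while re-matching nearest neighbors between scan and map yields the classic ICP scheme; handling the noise, occlusion, and symmetry issues mentioned above is where the actual thesis work begins.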

Computer Vision / Machine Learning Topics

  • 3D object classification in medical CT data. The task is to train a classifier for 3D volume data that recognizes features of medical and/or anatomical relevance in a 3D CT scan. For example, a simple task would be to localize specific bones or organs in a 3D scan. A more complex task would be to recognize medical conditions from example data. Recent advances in computer vision (in particular, representation learning methods such as deep convolutional neural networks) allow us to get quite impressive recognition performance in such tasks. This area could be explored in a bachelor thesis (basics) or a master thesis (more complex recognition tasks).
  • 3D object classification in point cloud scans. The same idea as above, but using point cloud scans from 3D scanners as the data source.
  • Improving deep learning methods. Can we use ideas from the computer graphics toolbox (structure models and data representations) in order to improve the learning efficiency of deep neural networks? This would be an advanced master thesis topic for students who are a bit theoretically inclined (not afraid of a bit of math).
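
For either data source, a common baseline is to bring the input into a form a 3D convolutional network can consume: a CT scan is already a volume, while a point cloud must first be rasterized into a fixed-size occupancy grid. A minimal NumPy sketch of that preprocessing step, making no assumptions about any particular dataset:

```python
import numpy as np

def voxelize(points, grid=16):
    """Map an (N, 3) point cloud to a binary occupancy grid of shape (grid,)*3."""
    mins = points.min(axis=0)
    extent = (points.max(axis=0) - mins).max()
    extent = max(extent, 1e-9)  # guard against degenerate (single-point) clouds
    # Scale points into [0, grid) and clamp boundary points into the last cell.
    idx = np.clip(((points - mins) / extent * grid).astype(int), 0, grid - 1)
    vol = np.zeros((grid, grid, grid), dtype=np.float32)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vol
```

Such a grid can be fed to an off-the-shelf 3D CNN; point-based architectures (PointNet-style networks) instead operate on the raw coordinates and skip voxelization entirely.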

Interdisciplinary Research

  • Medical data classification (as discussed above; collaboration with the medical school).
  • Pattern recognition in atmospheric simulations. The goal is to classify and find flow patterns in atmospheric simulation data. This could be done in a supervised setting (we have examples of what we are looking for) or an unsupervised setting (we want to cluster repetitive structures). This project topic would be offered in collaboration with the Institute for Physics of the Atmosphere. The topic could be shaped into a bachelor or master thesis.
  • Machine learning and deep networks for coarse-graining in multi-scale simulations. The topic says it all - can we learn how to conduct simulations on a very coarse (and easier to compute) level of detail such that the effects on the fine scale are predicted in a qualitatively correct way? There have been some recent, exciting ideas involving deep neural networks proposed in the literature that we could follow up on. This topic would be suitable as a master thesis project; a reasonably strong background in physics or mathematics would be highly recommended.
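
For the unsupervised setting of the atmospheric-patterns topic, a natural starting point is to cluster feature vectors extracted from the simulation fields. As a sketch only (the feature extraction is the interesting part and is omitted here), Lloyd's k-means algorithm in plain NumPy:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Lloyd's algorithm: X is (n, d); returns labels (n,) and centers (k, d)."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct samples.
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each sample to its nearest center (squared Euclidean distance).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Move each center to the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

In practice one would use a library implementation with smarter initialization (k-means++) and model selection over k; the sketch is only meant to make the "cluster repetitive structures" idea concrete.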

Further topics

Do you have an idea of your own that is related to visual computing? Or are you interested in a specific direction / topic area of that flavor? Do not hesitate to contact any member of our group for a discussion.


Bachelor and Master Theses

We offer topics for Bachelor's and Master's theses in different areas of computer graphics and computer vision. Please contact Prof. Dr.-Ing. H. Lensch via email.

Download LaTeX templates:

  • BA - Template
  • MA - Template
  • PhD - Template

To support successful progress, we hold a regular meeting for presenting intermediate results and discussing ideas. You can find the current schedule in ILIAS.


A list of completed theses and new thesis topics from the Computer Vision Group.

Are you about to start a BSc or MSc thesis? Please read our instructions for preparing and delivering your work.

Below we list possible thesis topics for Bachelor and Master students in the areas of Computer Vision, Machine Learning, Deep Learning and Pattern Recognition. The project descriptions leave plenty of room for your own ideas. If you would like to discuss a topic in detail, please contact the supervisor listed below and Prof. Paolo Favaro to schedule a meeting. Note that for MSc students in Computer Science it is required that the official advisor is a professor in CS.

AI deconvolution of light microscopy images

Level: master.

Background Light microscopy has become an indispensable tool in life sciences research. Deconvolution is an important image processing step for improving the quality of microscopy images: it removes out-of-focus light, increases resolution, and improves the signal-to-noise ratio. Classical deconvolution methods, such as regularisation or blind deconvolution, are currently implemented in numerous commercial software packages and widely used in research. Recently, AI-based deconvolution algorithms have been introduced and are under active development, as they have shown high application potential.

Aim Adaptation of available AI algorithms for the deconvolution of microscopy images, and validation of these methods against state-of-the-art commercially available deconvolution software.

Material and Methods The student will implement and further develop available AI deconvolution methods and acquire test microscopy images of different modalities. The performance of the developed AI algorithms will be validated against available commercial deconvolution software.
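
The classical baselines mentioned above are simple enough to state directly. As a hedged 1D illustration (not the commercial software the project will benchmark against), Richardson-Lucy deconvolution in Python/NumPy:

```python
import numpy as np

def richardson_lucy(observed, psf, iters=50):
    """Iteratively sharpen `observed`, modeled as a clean signal convolved with `psf`."""
    psf = psf / psf.sum()          # normalize the point spread function
    psf_mirror = psf[::-1]         # mirrored PSF for the correlation step
    estimate = np.full_like(observed, observed.mean())  # flat initial guess
    for _ in range(iters):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)  # avoid division by zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate
```

The 2D/3D microscopy case replaces np.convolve with FFT-based convolution and a measured point spread function; the AI methods to be evaluated instead learn a direct mapping rather than iterating a fixed forward model.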

Approximate workload:

  • AI algorithm development and implementation: 50%.
  • Data acquisition: 10%.
  • Comparison of performance: 40%.

Requirements

  • Interest in imaging.
  • Solid knowledge of AI.
  • Good programming skills.

Supervisors Paolo Favaro, Guillaume Witz, Yury Belyaev.

Institutes Computer Vision Group, Digital Science Lab, Microscopy Imaging Center.

Contact Yury Belyaev, Microscopy Imaging Center, [email protected], +41 78 899 0110.

Instance segmentation of cryo-ET images

Level: bachelor/master.

In the 1600s, a pioneering Dutch scientist named Antonie van Leeuwenhoek embarked on a remarkable journey that would forever transform our understanding of the natural world. Armed with a simple yet ingenious invention, the light microscope, he delved into uncharted territory, peering through its lens to reveal the hidden wonders of microscopic structures. Fast forward to today, where cryo-electron tomography (cryo-ET) has emerged as a groundbreaking technique, allowing researchers to study proteins within their natural cellular environments. Proteins, functioning as vital nano-machines, play crucial roles in life and understanding their localization and interactions is key to both basic research and disease comprehension. However, cryo-ET images pose challenges due to inherent noise and a scarcity of annotated data for training deep learning models.


Credit: S. Albert et al./PNAS (CC BY 4.0)

To address these challenges, this project aims to develop a self-supervised pipeline utilizing diffusion models for instance segmentation in cryo-ET images. By leveraging the power of diffusion models, which iteratively diffuse information to capture underlying patterns, the pipeline aims to refine and accurately segment cryo-ET images. Self-supervised learning, which relies on unlabeled data, reduces the dependence on extensive manual annotations. Successful implementation of this pipeline could revolutionize the field of structural biology, facilitating the analysis of protein distribution and organization within cellular contexts. Moreover, it has the potential to alleviate the limitations posed by limited annotated data, enabling more efficient extraction of valuable information from cryo-ET images and advancing biomedical applications by enhancing our understanding of protein behavior.

Methods The segmentation pipeline for cryo-electron tomography (cryo-ET) images consists of two stages: training a diffusion model for image generation and training an instance segmentation U-Net using synthetic and real segmentation masks.

    1. Diffusion Model Training:
        a. Data Collection: Collect and curate cryo-ET image datasets from the EMPIAR database (https://www.ebi.ac.uk/empiar/).
        b. Architecture Design: Select an appropriate architecture for the diffusion model.
        c. Model Evaluation: Cryo-ET experts will help assess image quality and fidelity through visual inspection and quantitative measures.
    2. Building the Segmentation Dataset:
        a. Synthetic and real mask generation: Use the trained diffusion model to generate synthetic cryo-ET images. The diffusion process will be seeded from either a real or a synthetic segmentation mask. This will yield pairs of cryo-ET images and segmentation masks.
    3. Instance Segmentation U-Net Training:
        a. Architecture Design: Choose an appropriate instance segmentation U-Net architecture.
        b. Model Evaluation: Evaluate the trained U-Net using precision, recall, and F1 score metrics.

By combining the diffusion model for cryo-ET image generation and the instance segmentation U-Net, this pipeline provides an efficient and accurate approach to segment structures in cryo-ET images, facilitating further analysis and interpretation.

References

1. Kwon, Diana. "The secret lives of cells - as never seen before." Nature 598.7882 (2021): 558-560.
2. Moebel, Emmanuel, et al. "Deep learning improves macromolecule identification in 3D cellular cryo-electron tomograms." Nature Methods 18.11 (2021): 1386-1394.
3. Rice, Gavin, et al. "TomoTwin: generalized 3D localization of macromolecules in cryo-electron tomograms with structural data mining." Nature Methods (2023): 1-10.

Contacts

Prof. Thomas Lemmin
Institute of Biochemistry and Molecular Medicine
Bühlstrasse 28, 3012 Bern
( [email protected] )

Prof. Paolo Favaro
Institute of Computer Science
Neubrückstrasse 10, 3012 Bern
( [email protected] )

Adding and removing multiple sclerosis lesions in MR imaging with diffusion networks

Background: Multiple sclerosis (MS) lesions are the result of demyelination: they appear as dark spots on T1-weighted MRI and as bright spots on FLAIR MRI. Image analysis for MS patients requires both the accurate detection of new and enhancing lesions and the assessment of atrophy via local thickness and/or volume changes in the cortex. Detection of new and growing lesions is possible using deep learning, but is made difficult by the relative lack of training data; meanwhile, cortical morphometry can be affected by the presence of lesions, meaning that removing lesions prior to morphometry may make it more robust. Existing 'lesion filling' methods are rather crude, yielding unrealistic-appearing brains where the borders of the removed lesions are clearly visible.

Aim: Denoising diffusion networks are the current gold standard in MRI image generation [1]; we aim to leverage this technology to remove and add lesions in existing MRI images. This will allow us to create realistic synthetic MRI images for training and validating MS lesion segmentation algorithms, and for investigating the sensitivity of morphometry software to the presence of MS lesions at a variety of lesion load levels.

Materials and Methods: A large, annotated, heterogeneous dataset of MRI data from MS patients, as well as images of healthy controls without white matter lesions, will be available for developing the method. The student will work in a research group with a long track record of applying deep learning methods to neuroimaging data, as well as experience training denoising diffusion networks.

Nature of the Thesis:

Literature review: 10%

Replication of Blob Loss paper: 10%

Implementation of the sliding window metrics: 10%

Training on MS lesion segmentation task: 30%

Extension to other datasets: 20%

Results analysis: 20%

Fig. Results of an existing lesion filling algorithm, showing inadequate performance

Requirements:

Interest/Experience with image processing

Python programming knowledge (Pytorch bonus)

Interest in neuroimaging

Supervisor(s):

PD. Dr. Richard McKinley

Institutes: Diagnostic and Interventional Neuroradiology

Center for Artificial Intelligence in Medicine (CAIM), University of Bern

References: [1] Brain Imaging Generation with Latent Diffusion Models, Pinaya et al., accepted in the Deep Generative Models workshop @ MICCAI 2022, https://arxiv.org/abs/2209.07162

Contact : PD Dr Richard McKinley, Support Centre for Advanced Neuroimaging ( [email protected] )

Improving metrics and loss functions for targets with imbalanced size: sliding window Dice coefficient and loss.

Background The Dice coefficient is the most commonly used metric for segmentation quality in medical imaging, and a differentiable version of the coefficient is often used as a loss function, in particular for small target classes such as multiple sclerosis lesions.  The Dice coefficient has the benefit that it is applicable where the target class is in the minority (for example, when segmenting small lesions).  However, if lesion sizes are mixed, the loss and metric are biased towards performance on large lesions, causing smaller lesions to be missed and harming overall lesion detection.  A recently proposed loss function (blob loss [1]) aims to combat this by treating each connected component of a lesion mask separately, and claims improvements over Dice loss on lesion detection scores in a variety of tasks.

Aim: The aim of this thesis is twofold.  First, to benchmark blob loss against a simple, potentially superior loss for instance detection: sliding window Dice loss, in which the Dice loss is calculated over a sliding window across the area/volume of the medical image.  Second, we will investigate whether a sliding window Dice coefficient is better correlated with lesion-wise detection metrics than the Dice coefficient, and may serve as an alternative metric capturing both global and instance-wise detection.
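As a sketch of the proposed metric, a plain Dice coefficient can be averaged over sliding windows so that each small lesion influences the score as much as a large one; the window size and stride below are illustrative choices:

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Global (soft) Dice coefficient for binary masks."""
    inter = np.sum(pred * target)
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def sliding_window_dice(pred, target, win=4, stride=2):
    """Average Dice over windows, so small lesions weigh as much as large ones."""
    scores = []
    h, w = pred.shape
    for i in range(0, h - win + 1, stride):
        for j in range(0, w - win + 1, stride):
            pw, tw = pred[i:i+win, j:j+win], target[i:i+win, j:j+win]
            if pw.sum() + tw.sum() > 0:       # skip windows with no foreground
                scores.append(dice(pw, tw))
    return float(np.mean(scores)) if scores else 1.0

pred = np.zeros((8, 8)); pred[0:2, 0:2] = 1   # a single small lesion
swd = sliding_window_dice(pred, pred)          # identical masks give 1.0
```

A missed small lesion drags the windowed average down by a full window's worth of score, whereas the global Dice barely changes when the other lesions are large.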

Materials and Methods: A large, annotated, heterogeneous dataset of MRI data from MS patients will be available for benchmarking the method, as well as our existing codebases for MS lesion segmentation.  Extending the method to other diseases and datasets (such as those covered in the blob loss paper) will strengthen the case for publication.  The student will work alongside clinicians and engineers carrying out research in multiple sclerosis lesion segmentation, in particular in the context of our running project supported by the CAIM grant.


Fig. An annotated MS lesion case, showing the variety of lesion sizes

References: [1] blob loss: instance imbalance aware loss functions for semantic segmentation, Kofler et al, https://arxiv.org/abs/2205.08209

Idempotent and partial skull-stripping in multispectral MRI imaging

Background Skull stripping (or brain extraction) refers to the masking of non-brain tissue from structural MRI imaging.  Since 3D MRI sequences allow reconstruction of facial features, many data providers supply data only after skull-stripping, making this a vital tool in data sharing.  Furthermore, skull-stripping is an important pre-processing step in many neuroimaging pipelines, even in the deep-learning era: while many methods could now operate on data with skull present, they have been trained only on skull-stripped data and therefore produce spurious results on data with the skull present.

High-quality skull-stripping algorithms based on deep learning are now widely available: the most prominent example is HD-BET [1].  A major downside of HD-BET is its behaviour on datasets to which skull-stripping has already been applied: in this case the algorithm falsely identifies brain tissue as skull and masks it.  A skull-stripping algorithm F not exhibiting this behaviour would be idempotent: F(F(x)) = F(x) for any image x.  Furthermore, legacy datasets from before the availability of high-quality skull-stripping algorithms may still contain images which have been inadequately skull-stripped: currently the only way to improve the skull-stripping of this data is to go back to the original data source or to correct the skull-stripping manually, which is time-consuming and prone to error.
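The idempotency requirement can be tested mechanically. A toy sketch, with simple intensity thresholding as an illustrative stand-in for a real skull-stripping operator:

```python
import numpy as np

def strip(x, thresh=0.5):
    """Toy stand-in for skull-stripping: zero out voxels below a threshold.
    Masking is idempotent here because already-zeroed voxels stay zero."""
    return np.where(x >= thresh, x, 0.0)

def is_idempotent(f, x, tol=1e-12):
    """Check F(F(x)) == F(x) numerically for a given input volume."""
    once, twice = f(x), f(f(x))
    return np.allclose(once, twice, atol=tol)

rng = np.random.default_rng(0)
vol = rng.random((4, 4, 4))            # toy 3D volume with intensities in [0, 1)
ok = is_idempotent(strip, vol)         # True for this masking operator
```

A learned network would not satisfy this property by construction; one option is to enforce it during training, e.g. with a consistency penalty between the output and the re-stripped output.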

Aim: In this project, the student will develop an idempotent skull-stripping network which can also handle partially skull-stripped inputs.  In the best case, the network will operate well on a large subset of the data we work with (e.g. structural, diffusion-weighted, perfusion-weighted and susceptibility-weighted MRI, at a variety of field strengths) to maximize its future applicability across the teams in our group.

Materials and Methods: Multiple datasets, both publicly available and internal (encompassing thousands of 3D volumes) will be available. Silver standard reference data for standard sequences at 1.5T and 3T can be generated using existing tools such as HD-BET: for other sequences and field strengths semi-supervised learning or methods improving robustness to domain shift may be employed.  Robustness to partial skull-stripping may be induced by a combination of learning theory and model-based approaches.


Dataset curation: 10%

Idempotent skull-stripping model building: 30%

Modelling of partial skull-stripping: 10%

Extension of model to handle partial skull: 30%

Results analysis: 10%

Fig. An example of failed skull-stripping requiring manual correction

References: [1] Isensee, F, Schell, M, Pflueger, I, et al. Automated brain extraction of multisequence MRI using artificial neural networks. Hum Brain Mapp . 2019; 40: 4952– 4964. https://doi.org/10.1002/hbm.24750

Automated leaf detection and leaf area estimation (for Arabidopsis thaliana)

Correlating plant phenotypes such as leaf area or number of leaves to the genotype (i.e. changes in DNA) is a common goal for plant breeders and molecular biologists. Such data can help not only to understand fundamental processes in nature, but also to improve ecotypes, e.g. to perform better under climate change or to reduce fertiliser input. However, collecting data for many plants is very time-consuming, so automated data acquisition is necessary.

The project aims at building a machine learning model to automatically detect plants in top-view images (see examples below), segment their leaves (see Fig C) and to estimate the leaf area. This information will then be used to determine the leaf area of different Arabidopsis ecotypes. The project will be carried out in collaboration with researchers of the Institute of Plant Sciences at the University of Bern. It will also involve the design and creation of a dataset of plant top-views with the corresponding annotation (provided by experts at the Institute of Plant Sciences).
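Downstream of segmentation, leaf area estimation reduces to counting labeled pixels and scaling by the ground resolution; a minimal sketch, where the mm-per-pixel factor is an assumed camera calibration:

```python
import numpy as np

def leaf_areas(label_mask, mm_per_px=0.2):
    """Per-leaf area in mm^2 from an instance label mask (0 = background)."""
    areas = {}
    for leaf_id in np.unique(label_mask):
        if leaf_id == 0:
            continue                                # skip background
        px = int(np.sum(label_mask == leaf_id))     # pixel count for this leaf
        areas[int(leaf_id)] = px * mm_per_px ** 2   # scale by pixel footprint
    return areas

mask = np.zeros((10, 10), dtype=int)
mask[1:4, 1:4] = 1        # leaf 1: 9 px
mask[6:8, 6:9] = 2        # leaf 2: 6 px
areas = leaf_areas(mask)  # area per leaf at 0.2 mm/px
```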


Contact: Prof. Dr. Paolo Favaro ( [email protected] )

Master Projects at the ARTORG Center

The Gerontechnology and Rehabilitation group at the ARTORG Center for Biomedical Engineering is offering multiple MSc thesis projects to students who are interested in working with real patient data, artificial intelligence and machine learning algorithms. The goal of these projects is to transfer the findings to the clinic in order to solve today's healthcare problems and thus improve the quality of life of patients.

  • Assessment of Digital Biomarkers at Home by Radar. [PDF]
  • Comparison of Radar, Seismograph and Ballistocardiography to Monitor Sleep at Home. [PDF]
  • Sentimental Analysis in Speech. [PDF]

Contact: Dr. Stephan Gerber ( [email protected] )

Internship in Computational Imaging at Prophesee

A 6-month internship at Prophesee, Grenoble, is offered to a talented Master's student.

The topic of the internship is working on burst imaging following the work of Sam Hasinoff, and exploring ways to improve it using event-based vision.

Compensation to cover the expenses of living in Grenoble is offered. Only students who have the legal right to work in France can apply.

Anyone interested can send an email with the CV to Daniele Perrone ( [email protected] ).

Using machine learning applied to wearables to predict mental health

This Master’s project lies at the intersection of psychiatry and computer science and aims to use machine learning techniques to improve health. Using sensors to detect sleep and waking behavior has as yet unexplored potential to reveal insights into health.  In this study, we make use of a watch-like device, called an actigraph, which tracks motion to quantify sleep behavior and waking activity. Participants in the study consist of healthy and depressed adolescents who wear actigraphs for a year, during which time we query their mental health status monthly using online questionnaires.  For this Master’s thesis we aim to use machine learning methods to predict mental health based on the data from the actigraph. The ability to predict mental health crises based on sleep and wake behavior would provide an opportunity for intervention, significantly impacting the lives of patients and their families. This Master’s thesis is a collaboration between Professor Paolo Favaro at the Institute of Computer Science ( [email protected] ) and Dr. Leila Tarokh at the Universitäre Psychiatrische Dienste (UPD) ( [email protected] ).  We are looking for a highly motivated individual interested in bridging disciplines.

Bachelor or Master Projects at the ARTORG Center

The Gerontechnology and Rehabilitation group at the ARTORG Center for Biomedical Engineering is offering multiple BSc and MSc thesis projects to students who are interested in working with real patient data, artificial intelligence and machine learning algorithms. The goal of these projects is to transfer the findings to the clinic in order to solve today's healthcare problems and thus improve the quality of life of patients.

  • Machine Learning Based Gait-Parameter Extraction by Using Simple Rangefinder Technology. [PDF]
  • Detection of Motion in Video Recordings. [PDF]
  • Home-Monitoring of Elderly by Radar. [PDF]
  • Gait feature detection in Parkinson's Disease. [PDF]
  • Development of an arthroscopic training device using virtual reality. [PDF]

Contact: Dr. Stephan Gerber ( [email protected] ), Michael Single ( [email protected]. ch )

Dynamic Transformer

Level: bachelor.

Visual Transformers have obtained state-of-the-art classification accuracies [ViT, DeiT, T2T, BoTNet]. Mixtures of experts can increase the capacity of a neural network by learning instance-dependent execution pathways [MoE]. In this research project we aim to push transformers to their limit by combining their dynamic attention with MoEs. Compared to the Switch Transformer [Switch], we will use a much more efficient formulation of mixing [CondConv, DynamicConv], and we will apply this idea in the attention part of the transformer rather than in the fully connected layer.

  • Input dependent attention kernel generation for better transformer layers.
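The CondConv/DynamicConv-style mixing could look roughly as follows: a routing vector computed from the input mixes several expert projection matrices into one input-dependent kernel, rather than routing tokens to discrete experts. A toy sketch with illustrative shapes:

```python
import numpy as np

def softmax(z):
    z = z - z.max(-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

def dynamic_projection(x, experts, router):
    """Mix expert weight matrices with input-dependent coefficients,
    then apply the single mixed kernel (CondConv-style mixing)."""
    coef = softmax(x.mean(0) @ router)        # (n_experts,) routing weights
    w = np.tensordot(coef, experts, axes=1)   # (d, d) mixed projection kernel
    return x @ w, coef

rng = np.random.default_rng(0)
d, n_exp = 8, 4
x = rng.standard_normal((16, d))              # 16 tokens
experts = rng.standard_normal((n_exp, d, d))  # expert projection matrices
router = rng.standard_normal((d, n_exp))
out, coef = dynamic_projection(x, experts, router)
```

Because the mixing happens in weight space, only one matrix multiply is applied per input, unlike Switch-style routing which dispatches tokens to separate expert layers.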

Publication Opportunity: Dynamic Neural Networks Meets Computer Vision (a CVPR 2021 Workshop)

Extensions:

  • The same idea could be extended to other ViT/Transformer based models [DETR, SETR, LSTR, TrackFormer, BERT]

Related Papers:

  • Visual Transformers: Token-based Image Representation and Processing for Computer Vision [ViT]
  • DeiT: Data-efficient Image Transformers [DeiT]
  • Bottleneck Transformers for Visual Recognition [BoTNet]
  • Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet [T2TViT]
  • Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer [MoE]
  • Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity [Switch]
  • CondConv: Conditionally Parameterized Convolutions for Efficient Inference [CondConv]
  • Dynamic Convolution: Attention over Convolution Kernels [DynamicConv]
  • End-to-End Object Detection with Transformers [DETR]
  • Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers [SETR]
  • End-to-end Lane Shape Prediction with Transformers [LSTR]
  • TrackFormer: Multi-Object Tracking with Transformers [TrackFormer]
  • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding [BERT]

Contact: Sepehr Sameni

Visual Transformers have obtained state-of-the-art classification accuracies for 2D images [ViT, DeiT, T2T, BoTNet]. In this project, we aim to extend the same ideas to 3D data (videos), which requires a more efficient attention mechanism [Performer, Axial, Linformer]. To accelerate the training process, we could use the [Multigrid] technique.

  • Better video understanding by attention blocks.
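The efficiency gain of the axial mechanism [Axial] comes from attending along one grid axis at a time: full attention over an H x W grid costs O((HW)^2), while two axial passes cost O(HW(H+W)). A toy sketch:

```python
import numpy as np

def softmax(z):
    z = z - z.max(-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

def attend(q, k, v):
    """Plain scaled dot-product attention over the second-to-last axis."""
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def axial_attention(x):
    """Attend along rows, then along columns, instead of over all HW positions."""
    x = attend(x, x, x)               # row axis: batched over H, attends across W
    xt = np.swapaxes(x, 0, 1)         # (W, H, d)
    xt = attend(xt, xt, xt)           # column axis: attends across H
    return np.swapaxes(xt, 0, 1)      # back to (H, W, d)

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 5, 8))    # H=6, W=5, d=8 feature grid
y = axial_attention(x)                # same shape as the input
```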

Publication Opportunity: LOVEU (a CVPR workshop) , Holistic Video Understanding (a CVPR workshop) , ActivityNet (a CVPR workshop)

  • Rethinking Attention with Performers [Performer]
  • Axial Attention in Multidimensional Transformers [Axial]
  • Linformer: Self-Attention with Linear Complexity [Linformer]
  • A Multigrid Method for Efficiently Training Video Models [Multigrid]

GIRAFFE is a newly introduced GAN that can generate scenes via composition with minimal supervision [GIRAFFE]. Generative methods can implicitly learn interpretable representation as can be seen in GAN image interpretations [GANSpace, GanLatentDiscovery]. Decoding GIRAFFE could give us per-object interpretable representations that could be used for scene manipulation, data augmentation, scene understanding, semantic segmentation, pose estimation [iNeRF], and more. 

In order to invert a GIRAFFE model, we will first train the generative model on Clevr and CompCars datasets, then we add a decoder to the pipeline and train this autoencoder. We can make the task easier by knowing the number of objects in the scene and/or knowing their positions. 

Goals:  

Scene Manipulation and Decomposition by Inverting the GIRAFFE 

Publication Opportunity:  DynaVis 2021 (a CVPR workshop on Dynamic Scene Reconstruction)  

Related Papers: 

  • GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields [GIRAFFE] 
  • Neural Scene Graphs for Dynamic Scenes 
  • pixelNeRF: Neural Radiance Fields from One or Few Images [pixelNeRF] 
  • NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [NeRF] 
  • Neural Volume Rendering: NeRF And Beyond 
  • GANSpace: Discovering Interpretable GAN Controls [GANSpace] 
  • Unsupervised Discovery of Interpretable Directions in the GAN Latent Space [GanLatentDiscovery] 
  • Inverting Neural Radiance Fields for Pose Estimation [iNeRF] 

Quantized ViT

Visual Transformers have obtained state-of-the-art classification accuracies [ViT, CLIP, DeiT], but the best ViT models are extremely compute-heavy, and running them even only for inference (without backpropagation) is expensive. Running transformers cheaply by quantization is not a new problem; it has been tackled before for BERT [BERT] in NLP [Q-BERT, Q8BERT, TernaryBERT, BinaryBERT]. In this project we will try to quantize pretrained ViT models.
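As a reference point, post-training quantization of a single weight tensor can be sketched as symmetric 8-bit quantize/dequantize; this is a generic sketch, not a specific ViT codepath:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)   # toy weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
err = np.abs(w - w_hat).max()   # rounding error, bounded by ~scale / 2
```

Per-channel scales and activation quantization (as in the BERT works cited below) follow the same pattern with more bookkeeping.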

Quantizing ViT models for faster inference and smaller models without losing accuracy 

Publication Opportunity:  Binary Networks for Computer Vision 2021 (a CVPR workshop)  

Extensions:  

  • Having a fast pipeline for image inference with ViT will allow us to dig deep into the attention of ViT and analyze it; we might be able to prune some attention heads or replace them with static patterns (like local convolution or dilated patterns). We might even be able to replace the transformer with a Performer and increase the throughput even more [Performer]. 
  • The same idea could be extended to other ViT based models [DETR, SETR, LSTR, TrackFormer, CPTR, BoTNet, T2TViT] 
  • Learning Transferable Visual Models From Natural Language Supervision [CLIP] 
  • Visual Transformers: Token-based Image Representation and Processing for Computer Vision [ViT] 
  • DeiT: Data-efficient Image Transformers [DeiT] 
  • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding [BERT] 
  • Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT [Q-BERT] 
  • Q8BERT: Quantized 8Bit BERT [Q8BERT] 
  • TernaryBERT: Distillation-aware Ultra-low Bit BERT [TernaryBERT] 
  • BinaryBERT: Pushing the Limit of BERT Quantization [BinaryBERT] 
  • Rethinking Attention with Performers [Performer] 
  • End-to-End Object Detection with Transformers [DETR] 
  • Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers [SETR] 
  • End-to-end Lane Shape Prediction with Transformers [LSTR] 
  • TrackFormer: Multi-Object Tracking with Transformers [TrackFormer] 
  • CPTR: Full Transformer Network for Image Captioning [CPTR] 
  • Bottleneck Transformers for Visual Recognition [BoTNet] 
  • Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet [T2TViT] 

Multimodal Contrastive Learning

Recently, contrastive learning has gained a lot of attention for self-supervised image representation learning [SimCLR, MoCo]. Contrastive learning can be extended to multimodal data, like videos (images and audio) [CMC, CoCLR]. Most contrastive methods require large batch sizes (or large memory pools), which makes them expensive to train. In this project we are going to use batch-size-independent contrastive methods [SwAV, BYOL, SimSiam] to train multimodal representation extractors. 
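For context, the batch-dependent objective that these methods avoid is the standard InfoNCE contrastive loss, in which every other item in the batch acts as a negative; a generic numpy sketch:

```python
import numpy as np

def info_nce(za, zb, temperature=0.1):
    """InfoNCE: matched pairs (za[i], zb[i]) are positives, the rest negatives."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)    # L2-normalize
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / temperature                        # (N, N) similarities
    logits = logits - logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))   # cross-entropy toward the diagonal

rng = np.random.default_rng(0)
za = rng.standard_normal((32, 16))                       # e.g. image embeddings
zb = za + 0.01 * rng.standard_normal((32, 16))           # aligned second modality
loss = info_nce(za, zb)   # near-aligned views give a small loss
```

The quality of the negatives here depends directly on the batch size N, which is exactly the dependence that SwAV/BYOL/SimSiam remove.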

Our main goal is to compare the proposed method with the CMC baseline, so we will be working with STL10, ImageNet, UCF101, HMDB51, and NYU Depth-V2 datasets. 

Inspired by the recent works on smaller datasets [ConVIRT, CPD], to accelerate the training speed, we could start with two pretrained single-modal models and finetune them with the proposed method.  

  • Extending SwAV to multimodal datasets 
  • Grasping a better understanding of the BYOL 

Publication Opportunity:  MULA 2021 (a CVPR workshop on Multimodal Learning and Applications)  

  • Most knowledge distillation methods for contrastive learners also use large batch sizes (or memory pools) [CRD, SEED], the proposed method could be extended for knowledge distillation. 
  • One could easily extend this idea to multiview learning: for example, one could have two different networks working on the same input and train them with contrastive learning; this may lead to better models [DeiT] through cross-model inductive bias communication. 
  • Self-supervised Co-training for Video Representation Learning [CoCLR] 
  • Learning Spatiotemporal Features via Video and Text Pair Discrimination [CPD] 
  • Audio-Visual Instance Discrimination with Cross-Modal Agreement [AVID-CMA] 
  • Self-Supervised Learning by Cross-Modal Audio-Video Clustering [XDC] 
  • Contrastive Multiview Coding [CMC] 
  • Contrastive Learning of Medical Visual Representations from Paired Images and Text [ConVIRT] 
  • A Simple Framework for Contrastive Learning of Visual Representations [SimCLR] 
  • Momentum Contrast for Unsupervised Visual Representation Learning [MoCo] 
  • Bootstrap your own latent: A new approach to self-supervised Learning [BYOL] 
  • Exploring Simple Siamese Representation Learning [SimSiam] 
  • Unsupervised Learning of Visual Features by Contrasting Cluster Assignments [SwAV] 
  • Contrastive Representation Distillation [CRD] 
  • SEED: Self-supervised Distillation For Visual Representation [SEED] 

Robustness of Neural Networks

Neural Networks have been found to achieve surprising performance in several tasks such as classification, detection and segmentation. However, they are also very sensitive to small (controlled) changes to the input. It has been shown that some changes to an image that are not visible to the naked eye may lead the network to output an incorrect label. This thesis will focus on studying recent progress in this area and aim to build a procedure for a trained network to self-assess its reliability in classification or one of the popular computer vision tasks.
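A classic illustration of this sensitivity is the fast gradient sign method (FGSM), shown here on a toy logistic classifier; this is a generic sketch, not the thesis method:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps=0.5):
    """Perturb x by eps in the gradient-sign direction to raise the loss."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w          # gradient of the BCE loss w.r.t. the input
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.standard_normal(10); b = 0.0
x = w.copy()                      # a point the model classifies confidently as 1
y = 1.0
p_before = sigmoid(x @ w + b)
x_adv = fgsm(x, y, w, b)
p_after = sigmoid(x_adv @ w + b)  # confidence drops after the perturbation
```

A self-assessment procedure as proposed in the thesis would need to flag inputs like `x_adv`, whose small coordinate-wise changes nonetheless move the model's output substantially.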

Contact: Paolo Favaro

Masters projects at sitem center

The Personalised Medicine Research Group at the sitem Center for Translational Medicine and Biomedical Entrepreneurship is offering multiple MSc thesis projects to biomedical engineering MSc students that may also be of interest to computer science students.

  • Automated quantification of cartilage quality for hip treatment decision support. PDF
  • Automated quantification of massive rotator cuff tears from MRI. PDF
  • Deep learning-based segmentation and fat fraction analysis of the shoulder muscles using quantitative MRI. PDF
  • Unsupervised Domain Adaption for Cross-Modality Hip Joint Segmentation. PDF

Contact: Dr. Kate Gerber

Internships/Master thesis @ Chronocam

3-6 month internships on event-based computer vision. Chronocam is a rapidly growing startup developing event-based technology, with more than 15 PhDs working on problems like tracking, detection, classification, SLAM, etc. Event-based computer vision has the potential to solve many long-standing problems in traditional computer vision, and this is a super exciting time as this potential is becoming more and more tangible in many real-world applications. For next year we are looking for motivated Master's and PhD students with good software engineering skills (C++ and/or Python), and preferably a good computer vision and deep learning background. PhD internships will be more research-focused and may lead to a publication.  For each intern we offer compensation to cover the expenses of living in Paris.  A list of some of the topics we want to explore:

  • Photo-realistic image synthesis and super-resolution from event-based data (PhD)
  • Self-supervised representation learning (PhD)
  • End-to-end Feature Learning for Event-based Data
  • Bio-inspired Filtering using Spiking Networks
  • On-the fly Compression of Event-based Streams for Low-Power IoT Cameras
  • Tracking of Multiple Objects with a Dual-Frequency Tracker
  • Event-based Autofocus
  • Stabilizing an Event-based Stream using an IMU
  • Crowd Monitoring for Low-power IoT Cameras
  • Road Extraction from an Event-based Camera Mounted in a Car for Autonomous Driving
  • Sign detection from an Event-based Camera Mounted in a Car for Autonomous Driving
  • High-frequency Eye Tracking

Email with attached CV to Daniele Perrone at  [email protected] .

Contact: Daniele Perrone

Object Detection in 3D Point Clouds

Today we have many 3D scanning techniques that allow us to capture the shape and appearance of objects. It is easier than ever to scan real 3D objects and transform them into a digital model for further processing, such as modeling, rendering or animation. However, the output of a 3D scanner is often a raw point cloud with little to no annotations. The unstructured nature of the point cloud representation makes it difficult for processing, e.g. surface reconstruction. One application is the detection and segmentation of an object of interest.  In this project, the student is challenged to design a system that takes a point cloud (a 3D scan) as input and outputs the names of objects contained in the scan. This output can then be used to eliminate outliers or points that belong to the background. The approach involves collecting a large dataset of 3D scans and training a neural network on it.

Contact: Adrian Wälchli

Shape Reconstruction from a Single RGB Image or Depth Map

A photograph accurately captures the world in a moment of time and from a specific perspective. Since it is a projection of the 3D space to a 2D image plane, the depth information is lost. Is it possible to restore it, given only a single photograph? In general, the answer is no. This problem is ill-posed, meaning that many different plausible depth maps exist, and there is no way of telling which one is the correct one.  However, if we cover one of our eyes, we are still able to recognize objects and estimate how far away they are. This motivates the exploration of an approach where prior knowledge can be leveraged to reduce the ill-posedness of the problem. Such a prior could be learned by a deep neural network, trained with many images and depth maps.

CNN Based Deblurring on Mobile

Deblurring finds many applications in our everyday life. It is particularly useful when taking pictures on handheld devices (e.g. smartphones) where camera shake can degrade important details. Therefore, it is desired to have a good deblurring algorithm implemented directly in the device.  In this project, the student will implement and optimize a state-of-the-art deblurring method based on a deep neural network for deployment on mobile phones (Android).  The goal is to reduce the number of network weights in order to reduce the memory footprint while preserving the quality of the deblurred images. The result will be a camera app that automatically deblurs the pictures, giving the user a choice of keeping the original or the deblurred image.

Depth from Blur

If an object in front of the camera or the camera itself moves while the aperture is open, the region of motion becomes blurred because the incoming light is accumulated in different positions across the sensor. If there is camera motion, there is also parallax. Thus, a motion blurred image contains depth information.  In this project, the student will tackle the problem of recovering a depth-map from a motion-blurred image. This includes the collection of a large dataset of blurred- and sharp images or videos using a pair or triplet of GoPro action cameras. Two cameras will be used in stereo to estimate the depth map, and the third captures the blurred frames. This data is then used to train a convolutional neural network that will predict the depth map from the blurry image.

Unsupervised Clustering Based on Pretext Tasks

The idea of this project is that we have two types of neural networks that work together: There is one network A that assigns images to k clusters and k (simple) networks of type B perform a self-supervised task on those clusters. The goal of all the networks is to make the k networks of type B perform well on the task. The assumption is that clustering in semantically similar groups will help the networks of type B to perform well. This could be done on the MNIST dataset with B being linear classifiers and the task being rotation prediction.
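The rotation-prediction pretext task mentioned above can be set up in a few lines: each image is rotated by a random multiple of 90 degrees and the rotation index becomes a free label. A toy sketch with random arrays standing in for MNIST:

```python
import numpy as np

def make_rotation_task(images, rng):
    """Build (rotated image, rotation label) pairs for the pretext task."""
    xs, ys = [], []
    for img in images:
        k = int(rng.integers(0, 4))     # 0, 90, 180 or 270 degrees
        xs.append(np.rot90(img, k))     # rotate in the image plane
        ys.append(k)                    # the label comes for free from the data
    return np.stack(xs), np.array(ys)

rng = np.random.default_rng(0)
images = rng.random((100, 28, 28))      # stand-in for MNIST digits
x, y = make_rotation_task(images, rng)
```

Network A would then be scored by how well each cluster's type-B classifier predicts these labels within its assigned cluster.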

Adversarial Data-Augmentation

The student designs a data augmentation network that transforms training images in such a way that image realism is preserved (e.g. with a constrained spatial transformer network) and the transformed images are more difficult to classify (trained via adversarial loss against an image classifier). The model will be evaluated for different data settings (especially in the low data regime), for example on the MNIST and CIFAR datasets.

Unsupervised Learning of Lip-reading from Videos

People with sensory impairment (hearing, speech, vision) depend heavily on assistive technologies to communicate and navigate in everyday life. The mass production of media content today makes it impossible to manually translate everything into a common language for assistive technologies, e.g. captions or sign language.  In this project, the student employs a neural network to learn a representation for lip-movement in videos in an unsupervised fashion, possibly with an encoder-decoder structure where the decoder reconstructs the audio signal. This requires collecting a large dataset of videos (e.g. from YouTube) of speakers or conversations where lip movement is visible. The outcome will be a neural network that learns an audio-visual representation of lip movement in videos, which can then be leveraged to generate captions for hearing impaired persons.

Learning to Generate Topographic Maps from Satellite Images

Satellite images have many applications, e.g. in meteorology, geography, education, cartography and warfare. They are an accurate and detailed depiction of the surface of the earth from above. Although it is relatively simple to collect many satellite images in an automated way, challenges arise when processing them for use in navigation and cartography. The idea of this project is to automatically convert an arbitrary satellite image, of e.g. a city, to a map of simple 2D shapes (streets, houses, forests) and label them with colors (semantic segmentation). The student will collect a dataset of satellite image and topological maps and train a deep neural network that learns to map from one domain to the other. The data could be obtained from a Google Maps database or similar.

New Variables of Brain Morphometry: the Potential and Limitations of CNN Regression

Timo Blattner · Sept. 2022.

The calculation of variables of brain morphology is computationally very expensive and time-consuming. A previous work showed the feasibility of extracting the variables directly from T1-weighted brain MRI images using a convolutional neural network. We used significantly more data and extended their model to a new set of neuromorphological variables, which could become interesting biomarkers in the future for the diagnosis of brain diseases. The model shows for nearly all subjects a less than 5% mean relative absolute error. This high relative accuracy can be attributed to the low morphological variance between subjects and the ability of the model to predict the cortical atrophy age trend. The model however fails to capture all the variance in the data and shows large regional differences. We attribute these limitations in part to the moderate to poor reliability of the ground truth generated by FreeSurfer. We further investigated the effects of training data size and model complexity on this regression task and found that the size of the dataset had a significant impact on performance, while deeper models did not perform better. Lack of interpretability and dependence on a silver ground truth are the main drawbacks of this direct regression approach.

Home Monitoring by Radar

Lars Ziegler · Sept. 2022

Detection and tracking of humans via UWB radar is a promising and continuously evolving field with great potential for medical technology. This contactless method of acquiring data on a patient's movement patterns is ideal for in-home application. As irregularities in a patient's movement patterns are an indicator of various health problems, including neurodegenerative diseases, the insight this data provides may enable earlier detection of such problems. In this thesis, a signal processing pipeline is presented with which a person's movement is modeled. During an experiment, 142 measurements were recorded by two separate radar systems and one lidar system, each consisting of multiple sensors. The models computed from these measurements by the signal processing pipeline were used to predict the times when a person stood up or sat down. The predictions showed an accuracy of 72.2%.

Revisiting Non-Learning Based 3D Reconstruction from Multiple Images

Aaron Sägesser · Oct. 2021

Arthroscopy consists of challenging tasks and requires skills that even today young surgeons still train directly during surgery. Existing simulators are expensive and rarely available. Given the growing potential of virtual reality (VR) head-mounted devices for simulation and their applicability in the medical context, these devices have become a promising alternative that would be orders of magnitude cheaper and could be made widely available. The overall aim of our project is to build a VR-based training device for arthroscopy, as this would be of great benefit and might even be applicable to other minimally invasive surgery (MIS). This thesis marks a first step of the project, focusing on exploring and comparing well-known algorithms for multi-view stereo (MVS) based 3D reconstruction on imagery acquired by an arthroscopic camera. Alongside this reconstruction, we aim to obtain essential measurements for comparing the VR environment to the real world, as validation of the realism of future VR tasks. We evaluate 3 different feature extraction algorithms with 3 different matching techniques and 2 different algorithms for the estimation of the fundamental (F) matrix. The evaluation of these 18 different setups is carried out with a reconstruction pipeline embedded in a Jupyter notebook, implemented in Python on top of common computer vision libraries, and compared on imagery generated with a mobile phone as well as against the reconstruction results of the state-of-the-art (SOTA) structure-from-motion (SfM) software COLMAP and the Multi-View Environment (MVE). Our comparative analysis highlights the challenges of heavy distortion, the fish-eye shape, and the weak image quality of arthroscopic imagery, as all results are substantially worse on this data. However, there are large differences between the different setups.
Scale-Invariant Feature Transform (SIFT) and Oriented FAST and Rotated BRIEF (ORB) in combination with k-Nearest Neighbour (kNN) matching and Least Median of Squares (LMedS) present the most promising results. Overall, the 3D reconstruction pipeline is a useful tool to foster the process of obtaining measurements from the arthroscopic exploration device and to complement the comparative research in this context.
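The fundamental-matrix estimation step above can be illustrated with the classic normalized 8-point algorithm on synthetic, noise-free correspondences. This is a simplified NumPy sketch; the robust LMedS re-weighting and the library implementations actually used in the thesis are omitted:

```python
import numpy as np

def normalize(pts):
    """Translate/scale points so the centroid is at the origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ ph.T).T, T

def eight_point(x1, x2):
    """Normalized 8-point estimate of F such that x2^T F x1 = 0."""
    n1, T1 = normalize(x1)
    n2, T2 = normalize(x2)
    A = np.array([[u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
                  for (u1, v1, _), (u2, v2, _) in zip(n1, n2)])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null-space solution
    U, S, Vt = np.linalg.svd(F)                 # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1                        # undo the normalization

# Synthetic scene: 20 3D points seen by two calibrated cameras.
rng = np.random.default_rng(0)
X = np.hstack([rng.uniform(-1, 1, (20, 3)) + [0, 0, 5], np.ones((20, 1))])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[1.0], [0.2], [0.0]])])
x1 = (P1 @ X.T).T; x1 = x1[:, :2] / x1[:, 2:]
x2 = (P2 @ X.T).T; x2 = x2[:, :2] / x2[:, 2:]
F = eight_point(x1, x2)
h1 = np.hstack([x1, np.ones((20, 1))])
h2 = np.hstack([x2, np.ones((20, 1))])
residual = np.abs(np.sum((h2 @ F) * h1, axis=1)).max()
print(residual)  # close to 0 for noise-free data
```

With real arthroscopic imagery, the heavy distortion and noise discussed above are exactly what makes a robust estimator such as LMedS necessary on top of this linear solution.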

Examination of Unsupervised Representation Learning by Predicting Image Rotations

Eric Lagger · Sept. 2020

In recent years, deep convolutional neural networks have made significant progress. Training such a network requires a lot of data, and supervised learning algorithms additionally require that the data be labeled. Labeling data takes a lot of human work, and therefore a lot of time and money. To avoid these inconveniences, we would like to find systems that do not need labeled data, i.e. unsupervised learning algorithms. This is the importance of unsupervised algorithms, even though their results are not yet on the same qualitative level as those of supervised algorithms. In this thesis we discuss such a system and compare our results to other papers. A deep convolutional neural network is trained to recognize the rotation that has been applied to a picture: we take a large number of images, apply simple rotations, and the task of the network is to predict in which direction each image has been rotated. The data does not need to be labeled with any category. As long as the original pictures share a consistent upright orientation, we hope the network learns high-level patterns in order to solve this task.
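The pretext task described above can be sketched in a few lines: rotate each unlabeled image by a random multiple of 90 degrees and use the rotation index as the free label. A NumPy sketch of the data generation only; the network and training loop are omitted:

```python
import numpy as np

def make_rotation_batch(images, rng):
    """Create (rotated_image, rotation_label) pairs: label k means k*90 degrees."""
    labels = rng.integers(0, 4, size=len(images))
    rotated = [np.rot90(img, k) for img, k in zip(images, labels)]
    return np.stack(rotated), labels

rng = np.random.default_rng(0)
images = rng.random((8, 32, 32, 3))   # stand-in for unlabeled pictures
batch, labels = make_rotation_batch(images, rng)
# The "free" supervision: the network must predict `labels` from `batch`.
print(batch.shape, labels.shape)  # (8, 32, 32, 3) (8,)
```

Because the labels are generated by the transformation itself, no human annotation is ever needed.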

StitchNet: Image Stitching using Autoencoders and Deep Convolutional Neural Networks

Maurice Rupp · Sept. 2019

This thesis explores the prospect of artificial neural networks for image processing tasks. More specifically, it aims at stitching multiple overlapping images into a bigger, panoramic picture. Until now, this task has been approached solely with "classical", hand-coded algorithms, while deep learning is at most used for specific subtasks. This thesis introduces StitchNet, a novel end-to-end neural network approach to image stitching that uses a pre-trained autoencoder and deep convolutional networks. In addition to presenting several new datasets for supervised image stitching, each with 120,000 training and 5,000 validation samples, this thesis also conducts various experiments with different kinds of existing networks designed for image super-resolution and image segmentation, adapted to the task of image stitching. StitchNet outperforms most of the adapted networks in both quantitative and qualitative results.

Facial Expression Recognition in the Wild

Luca Rolshoven · Sept. 2019

The idea of inferring the emotional state of a subject by looking at their face is nothing new, nor is the idea of automating this process using computers. Researchers used to computationally extract handcrafted features from face images that had proven to be effective and then used machine learning techniques to classify the facial expressions based on these features. Recently, there has been a trend towards using deep learning, and especially Convolutional Neural Networks (CNNs), for the classification of facial expressions. Researchers have been able to achieve good results on images taken in laboratories under the same or at least similar conditions. However, these models do not perform very well on more arbitrary face images with different head poses and illumination. This thesis aims to show the challenges of Facial Expression Recognition (FER) in this wild setting. It presents the currently used datasets and the state-of-the-art results on one of the biggest facial expression datasets currently available. The contributions of this thesis are twofold. Firstly, I analyze three well-known neural network architectures and their effectiveness at classifying facial expressions. Secondly, I present two modifications of one of these networks that lead to the proposed STN-COV model. While this model does not outperform all of the current state-of-the-art models, it does beat several of them.

A Study of 3D Reconstruction of Varying Objects with Deformable Parts Models

Raoul Grossenbacher · July 2019

This work covers a new approach to 3D reconstruction. In traditional 3D reconstruction, one uses multiple images of the same object and the information gained from the differences between the images (camera position, illumination, rotation of the object, and so on) to compute a point cloud representing the object. The characteristic trait shared by all these approaches is that one can change almost everything about the images, but not the object itself, because correspondences must be found between the images. To be able to use different instances of the same object class, we used a 3D DPM model that can find the different parts of an object in an image, thereby detecting correspondences between the different pictures, which we can then use to calculate the 3D model. To put this theory into practice, we fed a 3D DPM model trained to detect cars with pictures of different car brands, where no pair of images showed the same vehicle, and used the detected correspondences and the Factorization Method to compute the 3D point cloud. This leads to a new approach to 3D reconstruction, since previous methods required the same object instance in every image.
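The Factorization Method mentioned above can be sketched on synthetic orthographic views in the spirit of Tomasi-Kanade: stack the 2D part locations into a measurement matrix, subtract the per-view centroids, and read motion and shape off a rank-3 SVD. A simplified sketch; the exact variant used in the thesis may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.random((3, 12))                      # 12 3D points (the "object parts")
W_rows = []
for _ in range(5):                           # 5 views (here: 5 car images)
    R = np.linalg.qr(rng.standard_normal((3, 3)))[0][:2]  # orthographic camera
    t = rng.random(2)
    W_rows.append(R @ S + t[:, None])
W = np.vstack(W_rows)                        # 2F x P measurement matrix

W0 = W - W.mean(axis=1, keepdims=True)       # subtract per-view centroids
U, s, Vt = np.linalg.svd(W0)
M = U[:, :3] * s[:3]                         # motion (2F x 3)
S_hat = Vt[:3]                               # shape (3 x P), up to an affine transform
print(np.allclose(M @ S_hat, W0, atol=1e-8))  # True: rank-3 factorization is exact
```

In the noise-free case the centered measurement matrix has rank 3, so the truncated SVD reproduces it exactly; with DPM-detected parts the factorization becomes a least-squares fit.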

Motion Deblurring in the Wild: Replication and Improvements

Alvaro Juan Lahiguera · Jan. 2019

Coma Outcome Prediction with Convolutional Neural Networks

Stefan Jonas · Oct. 2018

Automatic Correction of Self-Introduced Errors in Source Code

Sven Kellenberger · Aug. 2018

Neural Face Transfer: Training a Deep Neural Network to Face-Swap

Till Nikolaus Schnabel · July 2018

This thesis explores the field of artificial neural networks with realistic-looking visual outputs. It aims at morphing face pictures of a specific identity to look like another individual by modifying only key features, such as eye color, while leaving identity-independent features unchanged. Prior works have covered symmetric translation between two specific domains but failed to optimize it on faces where only parts of the image may be changed. This work applies a face masking operation to the output at training time, which forces the image generator to preserve colors while altering the face, fitting it naturally inside the unmorphed surroundings. Various experiments are conducted, including an ablation study on the final setting, decreasing the baseline identity-switching performance from 81.7% to 75.8% whilst improving the average χ2 color distance from 0.551 to 0.434. The provided software gives users easy access to apply this neural face swap to images and videos of arbitrary crop and brings computer vision one step closer to replacing computer graphics in this specific area.
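The χ2 color distance reported above can take several concrete forms; one common choice is a symmetric chi-square distance between per-channel color histograms. A sketch under that assumption, not necessarily the exact metric used in the thesis:

```python
import numpy as np

def chi2_color_distance(img_a, img_b, bins=16):
    """Symmetric chi-square distance between per-channel color histograms."""
    d = 0.0
    for c in range(3):
        h1, _ = np.histogram(img_a[..., c], bins=bins, range=(0, 1), density=True)
        h2, _ = np.histogram(img_b[..., c], bins=bins, range=(0, 1), density=True)
        denom = h1 + h2
        mask = denom > 0                      # skip empty bin pairs
        d += 0.5 * np.sum((h1[mask] - h2[mask]) ** 2 / denom[mask])
    return d / 3.0                            # average over the three channels

rng = np.random.default_rng(0)
a = rng.random((64, 64, 3))
print(chi2_color_distance(a, a))          # 0.0 for identical colors
print(chi2_color_distance(a, a * 0.5) > 0)  # True: shifted colors give a positive distance
```

A lower distance between the morphed output and its surroundings indicates better color preservation, which is what the masking operation is meant to improve.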

A Study of the Importance of Parts in the Deformable Parts Model

Sammer Puran · June 2017

Self-Similarity as a Meta Feature

Lucas Husi · April 2017

A Study of 3D Deformable Parts Models for Detection and Pose-Estimation

Simon Jenni · March 2015

Accelerated Federated Learning on Client Silos with Label Noise: RHO Selection in Classification and Segmentation

Irakli Kelbakiani · May 2024

Federated Learning has recently gained more research interest, driven by factors including the growth of decentralized data, privacy concerns, and new privacy regulations. In Federated Learning, remote clients train models on their local datasets independently, and the local models are subsequently aggregated into a global model, which achieves better overall performance. Sending local model weights instead of the entire dataset is a significant advantage of Federated Learning over centralized classical machine learning algorithms. Federated Learning involves uploading and downloading model parameters multiple times, so there are multiple communication rounds between the global server and the remote clients, which imposes challenges. The high number of necessary communication rounds not only increases communication overhead but is also a critical limitation for servers with low network bandwidth, leading to latency and a higher probability of training failures caused by communication breakdowns. To mitigate these challenges, we aim to provide a fast-convergent Federated Learning training methodology that decreases the number of necessary communication rounds. We build on the Reducible Holdout Loss Selection (RHO-Loss) batch selection methodology, which "selects low-noise, task-relevant, non-redundant points for training" [1]. We hypothesize that if client silos employ the RHO-Loss methodology and successfully avoid training their local models on noisy and irrelevant samples, clients may offer stable and consistent updates to the global server, which could lead to faster convergence of the global model. Our contribution focuses on investigating the RHO-Loss method in a simulated federated setting for the Clothing1M dataset. We also examine its applicability to medical datasets and check its effectiveness in a simulated federated environment.
Our experimental results show a promising outcome, specifically a reduction in communication rounds for the Clothing1M dataset. However, as the success of the RHO-Loss selection method depends on the availability of sufficient training data for the target RHO model and for the Irreducible RHO model, we emphasize that our contribution applies to those Federated Learning scenarios where client silos hold enough training data to successfully train and benefit from their RHO model on their local dataset.
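The core RHO-Loss selection rule is simple to sketch: rank a candidate batch by current training loss minus the irreducible holdout loss (from the separately trained irreducible-loss model) and keep the top-k. A toy NumPy sketch with made-up loss values:

```python
import numpy as np

def rho_select(train_losses, irreducible_losses, k):
    """Select the k samples with the highest reducible holdout loss,
    i.e. current training loss minus the irreducible (holdout) loss."""
    rho = np.asarray(train_losses) - np.asarray(irreducible_losses)
    return np.argsort(rho)[::-1][:k]

# Toy batch: sample 2 is noisy (high loss under both models -> small rho),
# sample 1 is already learned, sample 0 is learnable but not yet learned.
train_losses       = [2.0, 0.1, 3.0, 1.0]
irreducible_losses = [0.2, 0.1, 2.9, 0.8]
sel = rho_select(train_losses, irreducible_losses, k=2)
print(sel)  # [0 3]
```

Note how the noisy sample 2, despite having the highest raw training loss, is skipped because its holdout loss is equally high, which is exactly the filtering behavior the hypothesis above relies on.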

Amodal Leaf Segmentation

Nicolas Maier · Nov. 2023

Plant phenotyping is the process of measuring and analyzing various traits of plants. It provides essential information on how genetic and environmental factors affect plant growth and development. Manual phenotyping is highly time-consuming; therefore, many computer vision and machine learning based methods have been proposed in recent years to perform this task automatically from images of the plants. However, the publicly available datasets (in particular, of Arabidopsis thaliana) are limited in size and diversity, making them unsuitable for generalizing to new, unseen environments. In this work, we propose a complete pipeline to automatically extract traits of interest from an image of Arabidopsis thaliana. Our method uses a minimal amount of existing annotated data from a source domain to generate a large synthetic dataset adapted to a different target domain (e.g., different backgrounds, lighting conditions, and plant layouts). In addition, unlike the source dataset, the synthetic one provides ground-truth annotations for the occluded parts of the leaves, which are relevant when measuring some characteristics of the plant, e.g., its total area. This synthetic dataset is then used to train a model to perform amodal instance segmentation of the leaves to obtain the total area, leaf count, and color of each plant. To validate our approach, we create a small dataset composed of manually annotated real images of Arabidopsis thaliana, which is used to assess the performance of the models.
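The role of amodal masks for the total-area trait can be illustrated with two toy leaf masks: summing amodal (full) leaf masks counts the occluded leaf parts, while a union of visible pixels undercounts them. A toy sketch with made-up masks:

```python
import numpy as np

# Two boolean leaf masks on a 6x6 grid; leaf B is partly occluded by leaf A.
a = np.zeros((6, 6), bool); a[1:4, 1:4] = True                # visible leaf A
b_amodal = np.zeros((6, 6), bool); b_amodal[2:6, 2:6] = True  # full (amodal) leaf B
b_visible = b_amodal & ~a                                     # what a modal mask sees

# Total leaf area should use amodal masks: the visible union undercounts.
amodal_area = a.sum() + b_amodal.sum()
visible_area = (a | b_amodal).sum()
print(amodal_area, visible_area)  # 25 21
```

The 4-pixel gap between the two numbers is exactly the occluded part of leaf B, which only amodal annotations can recover.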

Assessment of movement and pose in a hospital bed by ambient and wearable sensor technology in healthy subjects

Tony Licata · Sept. 2022

The use of automated systems for describing human motion has become possible in various domains. Most of the proposed systems are designed to work with people moving around in a standing position. Because such a system could be valuable in a medical environment, we propose in this work a pipeline that can effectively predict the motion of people lying on beds. The proposed pipeline is tested with a dataset composed of 41 participants executing 7 predefined tasks in a bed. The motion of the participants is measured with video cameras, accelerometers, and a pressure mat. Various experiments are carried out with the information retrieved from the dataset. Two approaches for combining the data from the different measurement technologies are explored. The performance of these experiments is measured, and the proposed pipeline is assembled from the components providing the best results. Finally, we show that the proposed pipeline only needs the video cameras, which makes the proposed setup easier to implement in real-life situations.

Machine Learning Based Prediction of Mental Health Using Wearable-measured Time Series

Seyedeh Sharareh Mirzargar · Sept. 2022

Depression is the second major cause of years spent in disability and has a growing prevalence in adolescents. The recent Covid-19 pandemic has intensified the situation and limited in-person patient monitoring due to distancing measures. Recent advances in wearable devices have made it possible to record the rest/activity cycle remotely, with high precision and in real-world contexts. We aim to use machine learning methods to predict an individual's mental health based on wearable-measured sleep and physical activity. Predicting an impending mental health crisis of an adolescent allows for prompt intervention, detection of depression onset or its recursion, and remote monitoring. To achieve this goal, we train three primary forecasting models (linear regression, random forest, and light gradient boosted machine (LightGBM)) and two deep learning models (block recurrent neural network (block RNN) and temporal convolutional network (TCN)) on Actigraph measurements to forecast mental health in terms of depression, anxiety, sleepiness, stress, sleep quality, and behavioral problems. Our models achieve a high forecasting performance, with the random forest performing best and reaching an accuracy of 98% for forecasting trait anxiety. We perform extensive experiments to evaluate the models' accuracy, generalization, and feature utilization, using a naive forecaster as the baseline. Our analysis shows minimal mental health changes over two months, making the prediction task easily achievable. Due to these minimal changes, the models tend to rely primarily on the historical values of the mental health measures instead of the Actigraph features. At the time of this master thesis, the data acquisition step is still in progress. In future work, we plan to train the models on the complete dataset using a longer forecasting horizon to increase the level of mental health changes, and to use transfer learning to compensate for the small dataset size.
This interdisciplinary project demonstrates the opportunities and challenges of machine learning based prediction of mental health, paving the way toward using the same techniques to forecast other disorders such as internalizing disorder, Parkinson's disease, and Alzheimer's disease, and toward improving the quality of life of individuals with a mental disorder.
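The naive-forecaster baseline mentioned above is simply a persistence forecast: predict the last observed value for every future step. With slowly changing trait scores it is already hard to beat, which is the effect described in the abstract. A toy sketch with made-up numbers:

```python
import numpy as np

def naive_forecast(series, horizon):
    """Persistence baseline: repeat the last observed value for every step."""
    return np.full(horizon, series[-1])

def mae(pred, truth):
    """Mean absolute error between forecast and ground truth."""
    return np.mean(np.abs(np.asarray(pred) - np.asarray(truth)))

# Slowly changing trait scores, as observed over the two-month window.
history = np.array([40.0, 40.5, 40.2, 40.4])
future  = np.array([40.3, 40.5])
error = mae(naive_forecast(history, 2), future)
print(error)  # approximately 0.1
```

Any learned model must beat this baseline to demonstrate that it exploits the Actigraph features rather than just the stability of the target.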

CNN Spike Detector: Detection of Spikes in Intracranial EEG using Convolutional Neural Networks

Stefan Jonas · Oct. 2021

The detection of interictal epileptiform discharges in the visual analysis of electroencephalography (EEG) is an important but very difficult, tedious, and time-consuming task. There have been decades of research on computer-assisted detection algorithms, most recently focused on Convolutional Neural Networks (CNNs). In this thesis, we present the CNN Spike Detector, a convolutional neural network to detect spikes in intracranial EEG. Our dataset of 70 intracranial EEG recordings from 26 subjects with epilepsy introduces new challenges to this research field. We report cross-validation results with a mean AUC of 0.926 (±0.04), an area under the precision-recall curve (AUPRC) of 0.652 (±0.10), and 12.3 (±7.47) false-positive epochs per minute at a sensitivity of 80%. A visual examination of false-positive segments is performed to understand the model behavior leading to this relatively high false detection rate. We note issues with the evaluation measures and highlight a major limitation of the common approach of detecting spikes in short segments, namely that the network is not capable of considering the greater context of the segment with regard to its origin. For this reason, we present the Context Model, an extension in which the CNN Spike Detector is supplied with additional information about the channel. Results show promising but limited performance improvements. This thesis provides important findings about the spike detection task for intracranial EEG and lays out promising future research directions toward a network capable of assisting experts in real-world clinical applications.
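The false-positive-epochs-per-minute figure reported above can be computed by choosing the score threshold that reaches the target sensitivity and counting negative epochs above it. A simplified sketch on synthetic scores; the thesis' exact evaluation protocol may differ:

```python
import numpy as np

def fp_per_minute_at_sensitivity(scores, labels, epoch_sec, target_sens=0.8):
    """Pick the highest threshold reaching the target sensitivity, then
    count false-positive epochs per minute of recording."""
    pos = np.sort(scores[labels == 1])[::-1]
    thr = pos[int(np.ceil(target_sens * len(pos))) - 1]  # sensitivity >= target
    fp = np.sum((scores >= thr) & (labels == 0))
    minutes = len(scores) * epoch_sec / 60.0
    return fp / minutes

rng = np.random.default_rng(0)
labels = (rng.random(600) < 0.05).astype(int)    # rare spike epochs
scores = rng.random(600) * 0.6 + 0.4 * labels    # imperfect, overlapping detector
fpm = fp_per_minute_at_sensitivity(scores, labels, epoch_sec=2)
print(fpm)  # false-positive epochs per minute at 80% sensitivity
```

Because the operating point is fixed by sensitivity, a detector with overlapping score distributions necessarily pays for recall with false positives per minute, which is the trade-off the abstract quantifies.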

PolitBERT - Deepfake Detection of American Politicians using Natural Language Processing

Maurice Rupp · April 2021

This thesis explores the application of modern Natural Language Processing techniques to the detection of artificially generated videos of popular American politicians. Instead of focusing on detecting anomalies and artifacts in images and sounds, this thesis focuses on detecting irregularities and inconsistencies in the words themselves, opening up a new possibility to detect fake content. A novel, domain-adapted, pre-trained version of the language model BERT, combined with several mechanisms to overcome severe dataset imbalances, yielded the best quantitative as well as qualitative results. In addition to creating the biggest publicly available dataset of English-speaking politicians, consisting of 1.5M sentences from over 1,000 persons, this thesis conducts various experiments with different kinds of text classification and sequence processing algorithms applied to the political domain. Furthermore, multiple ablations to manage severe data imbalance are presented and evaluated.

A Study on the Inversion of Generative Adversarial Networks

Ramona Beck · March 2021

The desire to use generative adversarial networks (GANs) for real-world tasks such as object segmentation or image manipulation is increasing as synthesis quality improves. This has given rise to an emerging research area called GAN inversion, which explores methods for embedding real images into the latent space of a GAN. In this work, we investigate different GAN inversion approaches using an existing generative model architecture that takes a completely unsupervised approach to object segmentation and is based on StyleGAN2. In particular, we propose and analyze algorithms for embedding real images into the different latent spaces Z, W, and W+ of StyleGAN following an optimization-based inversion approach, while also investigating a novel approach that allows fine-tuning of the generator during the inversion process. Furthermore, we investigate a hybrid and a learning-based inversion approach: in the former we train an encoder on embeddings optimized by our best optimization-based inversion approach, and in the latter we define an autoencoder, consisting of an encoder and the generator of our generative model as a decoder, and train it to map an image into the latent space. We demonstrate the effectiveness of our methods as well as their limitations through a quantitative comparison with existing inversion methods and through extensive qualitative and quantitative experiments with synthetic data as well as real images from a complex image dataset. We show that we achieve qualitatively satisfying embeddings in the W and W+ spaces with our optimization-based algorithms, that fine-tuning the generator during the inversion process leads to qualitatively better embeddings in all latent spaces studied, and that the learning-based approach also benefits from a variable generator as well as from pre-training with our hybrid approach.
Furthermore, we evaluate our approaches on the object segmentation task and show that both our optimization-based and our hybrid and learning-based methods are able to generate meaningful embeddings that achieve reasonable object segmentations. Overall, our proposed methods illustrate the potential that lies in the GAN inversion and its application to real-world tasks, especially in the relaxed version of the GAN inversion where the weights of the generator are allowed to vary.
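The optimization-based inversion described above boils down to gradient descent on a reconstruction loss over the latent code. The sketch below uses a fixed linear map as a stand-in generator so the gradient is analytic; a real StyleGAN2 generator would require backpropagation, but the loop is the same in spirit:

```python
import numpy as np

# Stand-in "generator": a fixed linear map G(z) = A z. This is an assumption
# for illustration only; the thesis inverts a deep StyleGAN2 generator.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 8))      # latent z (8-d) -> "image" (64-d)

def invert(x, steps=2000, lr=0.002):
    """Optimization-based inversion: gradient descent on ||G(z) - x||^2."""
    z = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = 2 * A.T @ (A @ z - x)  # analytic gradient of the reconstruction loss
        z -= lr * grad
    return z

z_true = rng.standard_normal(8)
x = A @ z_true                        # a "real image" lying in the generator's range
z_hat = invert(x)
error = np.linalg.norm(A @ z_hat - x)
print(error)  # reconstruction error, near 0
```

The relaxed inversion mentioned in the abstract corresponds to additionally updating the generator's own parameters (here, the entries of A) during this loop.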

Multi-scale Momentum Contrast for Self-supervised Image Classification

Zhao Xueqi · Dec. 2020

As supervised learning technology has matured, research focus has gradually shifted to the field of self-supervised learning. "Momentum Contrast" (MoCo) proposes a new self-supervised learning method and raises the accuracy of self-supervised learning to a new level. Inspired by another article, "Representation Learning by Learning to Count", we ask whether dividing a picture into four parts and passing them through a neural network can further improve the accuracy of MoCo. Unlike the original MoCo, this MoCo variant (Multi-scale MoCo) does not pass the augmented image directly through the encoder. Multi-scale MoCo crops and resizes the augmented images, and the four resulting parts are each passed through the encoder and then summed (the upsampled version does not resize the input but resizes the contrastive samples). This cropping is applied not only to the query q but also to the key queue k, since otherwise the weights of the key encoder might be damaged during the momentum update. This is discussed further in the experiments chapter, comparing the version where only the input samples are downsampled with the version where both input and contrast samples are downsampled. Human object recognition follows the same principle: when people see something they are familiar with, even if the object is not fully visible, they can still guess the object itself with high probability. Multi-scale MoCo applies this idea to the pretext part of MoCo, hoping to obtain better feature extraction. In this thesis, there are three versions of Multi-scale MoCo: downsampled input samples, downsampled input and contrast samples, and upsampled input samples. The differences between these versions are described in more detail later. The network architecture comparison includes ResNet-50, and the evaluation dataset is STL-10.
The weights obtained in the pretext task are transferred to the downstream classification stage, where the weights of all layers except the final linear layer are frozen and left unchanged (these weights come from the pretext task).
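The contrastive objective underlying MoCo and its variants is the InfoNCE loss over one positive key and a queue of negative keys, which can be sketched directly. A simplified single-query NumPy version, with the temperature 0.07 used in the MoCo paper:

```python
import numpy as np

def info_nce(q, k_pos, k_neg, tau=0.07):
    """MoCo-style contrastive (InfoNCE) loss for one query:
    one positive key and a queue of negative keys."""
    q = q / np.linalg.norm(q)
    k_pos = k_pos / np.linalg.norm(k_pos)
    k_neg = k_neg / np.linalg.norm(k_neg, axis=1, keepdims=True)
    logits = np.concatenate([[q @ k_pos], k_neg @ q]) / tau
    logits -= logits.max()                       # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[0])                         # positive key is index 0

rng = np.random.default_rng(0)
q = rng.standard_normal(128)                     # encoder output for the query crop
k_neg = rng.standard_normal((8, 128))            # stand-in for the key queue
loss_match = info_nce(q, q, k_neg)               # positive key equals the query
loss_rand  = info_nce(q, rng.standard_normal(128), k_neg)
print(loss_match < loss_rand)  # True: a matching pair gives a lower loss
```

In Multi-scale MoCo, q and the keys would be the summed encodings of the four crops rather than encodings of the whole image.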

Self-Supervised Learning Using Siamese Networks and Binary Classifier

Dušan Mihajlov · March 2020

In this thesis, we present several approaches for training a convolutional neural network using only unlabeled data. Our self-supervised learning algorithms are based on the relation between an image patch (i.e., a zoomed crop) and its original image. Using a siamese neural network architecture, we aim to recognize whether the image patch, which is input to the first network branch, comes from the same image presented to the second network branch. By applying transformations to both images, and different zoom sizes at different positions, we force the network to extract high-level features using its convolutional layers. On top of our siamese architecture, we have a simple binary classifier that measures the difference between the extracted feature maps and makes a decision. Thus, the only way the classifier can solve the task correctly is if our convolutional layers extract useful representations. These representations can then be used to solve many different tasks related to the data used for unsupervised training. As the main benchmark for all of our models, we use the STL-10 dataset, where we train a linear classifier on top of our convolutional layers with a small amount of manually labeled images, which is a widely used benchmark for unsupervised learning tasks. We also combine our idea with recent work on the same topic, the network called RotNet, which makes use of image rotations and therefore forces the network to learn rotation-dependent features from the dataset. As a result of this combination, we create a new procedure that outperforms the original RotNet.
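The pair-construction scheme described above can be sketched as follows: sample a zoomed patch either from the same image (positive) or from a different one (negative), and let the binary label supervise the siamese network. A simplified sketch; the transformations and multi-scale zooms are omitted:

```python
import numpy as np

def make_pair(images, positive, rng, size=8):
    """Build one training pair for the siamese network: a patch and a full
    image, labeled 1 if the patch comes from that image, else 0."""
    i = rng.integers(len(images))
    # For a negative pair, pick any other image index.
    j = i if positive else (i + 1 + rng.integers(len(images) - 1)) % len(images)
    y, x = rng.integers(0, images.shape[1] - size, size=2)
    patch = images[j, y:y + size, x:x + size]
    return patch, images[i], int(positive)

rng = np.random.default_rng(0)
images = rng.random((10, 32, 32))                # stand-in unlabeled dataset
patch, image, label = make_pair(images, positive=True, rng=rng)
npatch, nimage, nlabel = make_pair(images, positive=False, rng=rng)
print(patch.shape, label, nlabel)  # (8, 8) 1 0
```

The binary classifier sitting on top of the two branches only sees the feature maps of `patch` and `image`, so it can succeed only if those features encode image content.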

Learning Object Representations by Mixing Scenes

Lukas Zbinden · May 2019

In the digital age of ever increasing data amassment and accessibility, the demand for scalable machine learning models effective at refining the new oil is unprecedented. Unsupervised representation learning methods present a promising approach to exploiting this invaluable yet unlabeled digital resource at scale. However, the majority of these approaches focus on synthetic or simplified datasets of images. What if a method could learn directly from natural Internet-scale image data? In this thesis, we propose a novel approach for unsupervised learning of object representations by mixing natural image scenes. Without any human help, our method mixes visually similar images to synthesize new realistic scenes using adversarial training. In this process the model learns to represent and understand the objects prevalent in natural image data and makes them available for downstream applications. For example, it enables the transfer of objects from one scene to another. Through qualitative experiments on complex image data, we show the effectiveness of our method along with its limitations. Moreover, we benchmark our approach quantitatively against state-of-the-art works on the STL-10 dataset. Our proposed method demonstrates the potential that lies in learning representations directly from natural image data and reinforces it as a promising avenue for future research.

Representation Learning using Semantic Distances

Markus Roth · May 2019

Zero-Shot Learning Using Generative Adversarial Networks

Hamed Hemati · Dec. 2018

Dimensionality Reduction via CNNs: Learning the Distance Between Images

Ioannis Glampedakis · Sept. 2018

Learning to Play Othello Using Deep Reinforcement Learning and Self Play

Thomas Simon Steinmann · Sept. 2018

ABA-J Interactive Multi-Modality Tissue Section-to-Volume Alignment: A Brain Atlasing Toolkit for ImageJ

Felix Meyenhofer · March 2018

Learning Visual Odometry with Recurrent Neural Networks

Adrian Wälchli · Feb. 2018

In computer vision, Visual Odometry is the problem of recovering the camera motion from a video. It is related to Structure from Motion, the problem of reconstructing the 3D geometry from a collection of images. Decades of research in these areas have brought successful algorithms that are used in applications like autonomous navigation, motion capture, augmented reality, and others. Despite the success of these prior works in real-world environments, their robustness is highly dependent on manual calibration and on the magnitude of noise present in the images in the form of, e.g., non-Lambertian surfaces, dynamic motion, and other forms of ambiguity. This thesis explores an alternative approach to the Visual Odometry problem via deep learning, that is, a specific form of machine learning with artificial neural networks. It describes and focuses on the implementation of a recent work that proposes the use of Recurrent Neural Networks to learn dependencies over time, owing to the sequential nature of the input. Together with a convolutional neural network that extracts motion features from the input stream, the recurrent part accumulates knowledge from the past to estimate the camera pose at each point in time. An analysis of the performance of this system is carried out on real and synthetic data. The evaluation covers several ways of training the network as well as the impact and limitations of the recurrent connection for Visual Odometry.
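The accumulation performed by the recurrent part can be illustrated with plain pose composition: chaining per-frame relative motions into absolute poses. A toy SE(2) sketch; the actual system estimates full 6-DoF poses learned from images:

```python
import numpy as np

def se2(dx, dy, dtheta):
    """Homogeneous 2D rigid motion (a minimal stand-in for a camera pose)."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s, dx], [s, c, dy], [0, 0, 1]])

def integrate(relative_motions):
    """Chain per-frame relative motions into absolute poses, as the recurrent
    part of the network effectively does over the image sequence."""
    pose = np.eye(3)
    trajectory = [pose]
    for m in relative_motions:
        pose = pose @ m
        trajectory.append(pose)
    return trajectory

# Four identical "move forward, then turn 90 degrees" steps close a square
# loop and bring the camera back to its starting pose.
steps = [se2(1.0, 0.0, np.pi / 2)] * 4
traj = integrate(steps)
print(np.allclose(traj[-1], np.eye(3)))  # True: closed square loop
```

This composition also shows why drift is a central problem: any error in a relative motion propagates into every subsequent absolute pose.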

Crime location and timing prediction

Bernard Swart · Jan. 2018

From Cartoons to Real Images: An Approach to Unsupervised Visual Representation Learning

Simon Jenni · Feb. 2017

Automatic and Large-Scale Assessment of Fluid in Retinal OCT Volume

Nina Mujkanovic · Dec. 2016

Segmentation in 3D Using Eye-Tracking Technology

Michele Wyss · July 2016

Accurate Scale Thresholding via Logarithmic Total Variation Prior

Remo Diethelm · Aug. 2014

Novel Techniques for Robust and Generalizable Machine Learning

Abdelhak Lemkhenter · Sept. 2023

Neural networks have transcended their status as a powerful proof of concept to become a highly disruptive technology that has revolutionized many quantitative fields, such as drug discovery, autonomous vehicles, and machine translation. Today, it is nearly impossible to go a single day without interacting with a neural network-powered application. From search engines to on-device photo processing, neural networks have become the go-to solution thanks to recent advances in computational hardware and an unprecedented scale of training data. Larger and less curated datasets, typically obtained through web crawling, have greatly propelled the capabilities of neural networks forward. However, this increase in scale amplifies certain challenges associated with training such models. Beyond toy or carefully curated datasets, data in the wild is plagued with biases, imbalances, and various noisy components. Given the larger size of modern neural networks, such models run the risk of learning spurious correlations that fail to generalize beyond their training data. This thesis addresses the problem of training more robust and generalizable machine learning models across a wide range of learning paradigms for medical time series and computer vision tasks. The former is a typical example of a low signal-to-noise ratio data modality with a high degree of variability between subjects and datasets. There, we tailor the training scheme to focus on robust patterns that generalize to new subjects and to ignore the noisier, subject-specific patterns. To achieve this, we first introduce a physiologically inspired unsupervised training task and then extend it by explicitly optimizing for cross-dataset generalization using meta-learning.
In the context of image classification, we address the challenge of training semi-supervised models under class imbalance by designing a novel label refinement strategy with higher local sensitivity to minority class samples while preserving the global data distribution. Lastly, we introduce a new Generative Adversarial Network (GAN) training loss. Such generative models could be applied to improve the training of subsequent models in the low-data regime by augmenting the dataset with generated samples. Unfortunately, GAN training relies on a delicate balance between its components, making it prone to mode collapse. Our contribution consists of defining a more principled GAN loss whose gradients incentivize the generator to seek out missing modes in its distribution. All in all, this thesis tackles the challenge of training more robust machine learning models that can generalize beyond their training data. This necessitates the development of methods specifically tailored to handle the diverse biases and spurious correlations inherent in the data. It is important to note that achieving greater generalizability goes beyond simply increasing the volume of data; it requires meticulous consideration of training objectives and model architecture. By tackling these challenges, this research contributes to advancing the field of machine learning and underscores the significance of thoughtful design in obtaining more resilient and versatile models.

Automated Sleep Scoring, Deep Learning and Physician Supervision

Luigi Fiorillo · oct. 2022.

Sleep plays a crucial role in human well-being. Polysomnography is used in sleep medicine as a diagnostic tool to objectively analyze the quality of sleep. Sleep scoring is the procedure of extracting sleep cycle information from whole-night electrophysiological signals. Scoring is done worldwide by sleep physicians according to the official American Academy of Sleep Medicine (AASM) scoring manual. In recent decades, a wide variety of deep learning based algorithms have been proposed to automate the sleep scoring task. In this thesis we study the reasons why these algorithms have not been adopted into the daily clinical routine, with the goal of bridging the existing gap between automatic sleep scoring models and sleep physicians. In this light, the primary step is the design of a simplified sleep scoring architecture that also provides an estimate of the model's uncertainty. Besides achieving results on par with the most up-to-date scoring systems, we demonstrate the efficiency of ensemble learning based algorithms, together with label smoothing techniques, in both enhancing the performance and calibrating the simplified scoring model. We introduce an uncertainty estimation procedure to identify the most challenging sleep stage predictions and to quantify the disagreement between the predictions given by the model and the annotations given by the physicians. In this thesis we also propose a novel method to integrate the inter-scorer variability into the training procedure of a sleep scoring model. We clearly show that a deep learning model is able to encode this variability and thereby better adapt to the consensus of a group of scoring physicians. We finally address the generalization ability of a deep learning based sleep scoring system, further studying its resilience to sleep complexity and to the AASM scoring rules. We find that there is no need to train the algorithm strictly following the AASM guidelines.
Most importantly, using data from multiple data centers results in a better-performing model than training on a single data cohort. The variability among different scorers and data centers needs to be taken into account, more than the variability among sleep disorders.
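Label smoothing, mentioned above as one of the calibration techniques, can be sketched in a few lines. This is the generic textbook formulation, not necessarily the exact variant used in the thesis; the function name and the five-stage example are illustrative.

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Generic label smoothing: move a fraction eps of the probability mass
    from the target class to a uniform distribution over all classes."""
    n_classes = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / n_classes

# Five sleep stages (e.g. W, N1, N2, N3, REM); the scored stage is index 2.
y = np.eye(5)[2]
print(smooth_labels(y))  # [0.02 0.02 0.92 0.02 0.02]
```

Training on these softened targets penalizes over-confident predictions, which is one way to obtain a better-calibrated scoring model.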

Learning Representations for Controllable Image Restoration

Givi Meishvili · march 2022.

Deep Convolutional Neural Networks have sparked a renaissance in all the sub-fields of computer vision. Tremendous progress has been made in the area of image restoration. The research community has pushed the boundaries of image deblurring, super-resolution, and denoising. However, given a distorted image, most existing methods typically produce a single restored output. The tasks mentioned above are inherently ill-posed, leading to an infinite number of plausible solutions. This thesis focuses on designing image restoration techniques capable of producing multiple restored results and granting users more control over the restoration process. Towards this goal, we demonstrate how one can leverage the power of unsupervised representation learning. Image restoration is especially important when applied to distorted images of human faces, due to their social significance. Generative Adversarial Networks enable an unprecedented level of generated facial detail combined with a smooth latent space. We leverage the power of GANs towards the goal of learning controllable neural face representations. We demonstrate how to learn an inverse mapping from image space to these latent representations, how to tune these representations towards a specific task, and finally how to manipulate latent codes in these spaces. For example, we show how GANs and their inverse mappings enable the restoration and editing of faces in the context of extreme face super-resolution, and the generation of sharp novel-view videos from a single motion-blurred image of a face. This thesis also addresses the more general blind super-resolution, denoising, and scratch removal problems, where blur kernels and noise levels are unknown. We resort to contrastive representation learning and first learn a latent space of degradations. We demonstrate that the learned representation allows inference of the ground-truth degradation parameters and can guide the restoration process.
Moreover, it enables control over the amount of deblurring and denoising in the restoration via manipulation of latent degradation features.

Learning Generalizable Visual Patterns Without Human Supervision

Simon Jenni · oct. 2021.

Owing to the existence of large labeled datasets, Deep Convolutional Neural Networks have ushered in a renaissance in computer vision. However, almost all of the visual data we generate daily - several human lives' worth of it - remains unlabeled and thus out of reach of today's dominant supervised learning paradigm. This thesis focuses on techniques that steer deep models towards learning generalizable visual patterns without human supervision. Our primary tool in this endeavor is the design of Self-Supervised Learning tasks, i.e., pretext tasks whose labels do not involve human labor. Besides enabling learning from large amounts of unlabeled data, we demonstrate how self-supervision can capture relevant patterns that supervised learning largely misses. For example, we design learning tasks that learn deep representations capturing shape from images, motion from video, and 3D pose features from multi-view data. Notably, the design of these tasks follows a common principle: the recognition of data transformations. The strong performance of the learned representations on downstream vision tasks such as classification, segmentation, action recognition, or pose estimation validates this pretext-task design. This thesis also explores the use of Generative Adversarial Networks (GANs) for unsupervised representation learning. Besides leveraging generative adversarial learning to define image transformations for self-supervised learning tasks, we also address training instabilities of GANs through the use of noise. While unsupervised techniques can significantly reduce the burden of supervision, in the end we still rely on some annotated examples to fine-tune the learned representations towards a target task. To improve learning from scarce or noisy labels, we describe a supervised learning algorithm with improved generalization in these challenging settings.

Learning Interpretable Representations of Images

Attila Szabó · june 2019.

Computers represent images with pixels, and each pixel contains three numbers for its red, green, and blue colour values. These numbers are meaningless for humans, and they are mostly useless when used directly with classical machine learning techniques like linear classifiers. Interpretable representations are the attributes that humans understand: the colour of the hair, the viewpoint of a car, or the 3D shape of the object in the scene. Many computer vision tasks can be viewed as learning interpretable representations; for example, a supervised classification algorithm directly learns to represent images with their class labels. In this work we aim to learn interpretable representations (or features) indirectly, with lower levels of supervision. This approach has the advantage of cost savings on dataset annotations and the flexibility of using the features for multiple follow-up tasks. We make contributions in three main areas: weakly supervised learning, unsupervised learning, and 3D reconstruction. In the weakly supervised case we use image pairs as supervision. Each pair shares a common attribute and differs in a varying attribute. We propose a training method that learns to separate the attributes into separate feature vectors. These features are then used for attribute transfer and classification. We also show theoretical results on the ambiguities of the learning task and on ways to avoid degenerate solutions. We present a method for unsupervised representation learning that separates semantically meaningful concepts. We explain, and show through ablation studies, how the components of our proposed method work: a mixing autoencoder, a generative adversarial net, and a classifier. Finally, we propose a method for learning single-image 3D reconstruction. It uses only images; no human annotation, stereo, synthetic renderings, or ground-truth depth maps are needed. We train a generative model that learns the 3D shape distribution and an encoder that reconstructs the 3D shape.
For this we exploit the notion of image realism: the 3D reconstruction of the object has to look realistic when it is rendered from different random angles. We prove the efficacy of our method from first principles.

Learning Controllable Representations for Image Synthesis

Qiyang Hu · june 2019.

In this thesis, our focus is learning a controllable representation and applying the learned controllable feature representation to image synthesis, video generation, and even 3D reconstruction. We propose different methods to disentangle the feature representation in neural networks and analyze the challenges in disentanglement, such as the reference ambiguity and the shortcut problem, when using weak labels. We use the disentangled feature representation to transfer attributes between images, such as exchanging hairstyles between two face images. Furthermore, we study how another type of feature, the sketch, works in a neural network. A sketch can provide the shape and contour of an object, such as the silhouette of a side-view face. We leverage the silhouette constraint to improve 3D face reconstruction from 2D images. A sketch can also provide the moving direction of an object, so we investigate how one can manipulate an object to follow a trajectory provided by a user sketch. We propose a method to automatically generate video clips from a single image input, using the sketch as motion and trajectory guidance to animate the object in that image. We demonstrate the effectiveness of our approaches on several synthetic and real datasets.

Beyond Supervised Representation Learning

Mehdi Noroozi · jan. 2019.

The complexity of any information processing task is highly dependent on the space in which the data is represented. Unfortunately, pixel space is not appropriate for computer vision tasks such as object classification. Traditional computer vision approaches involve a multi-stage pipeline where images are first transformed into a feature space through a handcrafted function, followed by a solution computed in that feature space. The challenge with this approach is the complexity of designing handcrafted functions that extract robust features. Deep learning based approaches address this issue by end-to-end training of a neural network for some task, which lets the network discover the appropriate representation for the training task automatically. It turns out that the image classification task on large-scale annotated datasets yields a representation transferable to other computer vision tasks. However, supervised representation learning is limited by the availability of annotations. In this thesis we study self-supervised representation learning, where the goal is to alleviate these limitations by substituting the classification task with pretext tasks for which the labels come for free. We discuss self-supervised learning by solving jigsaw puzzles, which uses context as a supervisory signal. The rationale behind this task is that the network must extract features about object parts and their spatial configurations to solve the jigsaw puzzles. We also discuss a method for representation learning that uses an artificial supervisory signal based on counting visual primitives. This supervisory signal is obtained from an equivariance relation. We use two image transformations in the context of counting: scaling and tiling. The first transformation exploits the fact that the number of visual primitives should be invariant to scale. The second transformation allows us to equate the total number of visual primitives in each tile to that in the whole image.
The most effective transfer strategy is fine-tuning, which restricts one to using the same model, or parts thereof, for both the pretext and target tasks. We discuss a novel framework for self-supervised learning that overcomes limitations in designing and comparing different tasks, models, and data domains. In particular, our framework decouples the structure of the self-supervised model from the final task-specific fine-tuned model. Finally, we study the problem of multi-task representation learning. A naive approach to enhance the representation learned by a task is to train it jointly with other tasks that capture orthogonal attributes. Having a diverse set of auxiliary tasks imposes challenges on multi-task training from scratch. We propose a framework that allows us to combine arbitrarily different feature spaces into a single deep neural network. We reduce the auxiliary tasks to classification tasks and, consequently, the multi-task learning problem to a multi-label classification task. Nevertheless, combining multiple representation spaces without being aware of the target task might be suboptimal. As our second contribution, we show empirically that this is indeed the case and propose to combine multiple tasks after fine-tuning on the target task.
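The jigsaw pretext task described above can be illustrated with a toy example: split an image into a 3x3 grid of tiles, shuffle them with a permutation drawn from a fixed set, and use the index of that permutation as the training label. The grid size, permutation, and function name below are illustrative assumptions, not the exact setup of the thesis.

```python
import numpy as np

def make_jigsaw_sample(image, permutation, grid=3):
    """Split 'image' into grid x grid tiles and reorder them according to
    'permutation'; a network is then trained to predict the permutation
    index from the shuffled tiles."""
    th, tw = image.shape[0] // grid, image.shape[1] // grid
    tiles = [image[r*th:(r+1)*th, c*tw:(c+1)*tw]
             for r in range(grid) for c in range(grid)]
    return [tiles[p] for p in permutation]

img = np.arange(81, dtype=float).reshape(9, 9)
perm = [8, 6, 7, 5, 3, 4, 2, 0, 1]   # one entry of a fixed permutation set
shuffled = make_jigsaw_sample(img, perm)
```

Solving this task forces the network to recognize object parts and their spatial arrangement, which is exactly the supervisory signal the text describes.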

Motion Deblurring from a Single Image

Meiguang Jin · dec. 2018.

With the information explosion, a tremendous number of photos are captured and shared via social media every day. Technically, a photo requires a finite exposure to accumulate light from the scene. Thus, objects moving during the exposure generate motion blur in a photo. Motion blur is an image degradation that makes visual content less interpretable and is therefore often seen as a nuisance. Although motion blur can be reduced by setting a short exposure time, the insufficient amount of light then has to be compensated by increasing the sensor's sensitivity, which inevitably introduces a large amount of sensor noise. This motivates removing motion blur computationally. Motion deblurring is an important problem in computer vision, and it is challenging due to its ill-posed nature, which means the solution is not well defined. Mathematically, a blurry image caused by uniform motion is formed by the convolution of a blur kernel with a latent sharp image. Potentially, there are infinitely many pairs of blur kernel and latent sharp image that can result in the same blurry image. Hence, some prior knowledge or regularization is required to address this problem. Even if the blur kernel is known, restoring the latent sharp image is still difficult, as high frequency information has been removed. Although we can model the uniform motion deblurring problem mathematically, this model only covers in-plane translational camera motion. In practice, motion is more complicated and can be non-uniform. Non-uniform motion blur can come from many sources: camera out-of-plane rotation, scene depth change, object motion, and so on. Thus, it is even more challenging to remove non-uniform motion blur. In this thesis, our focus is motion blur removal. We aim to address four challenging motion deblurring problems. We start from the noise-blind image deblurring scenario, where the blur kernel is known but the noise level is unknown.
We introduce an efficient and robust solution based on a Bayesian framework, using a smooth generalization of the 0-1 loss to address this problem. Then we study the blind uniform motion deblurring scenario, where both the blur kernel and the latent sharp image are unknown. We exploit the relative scale ambiguity between the latent sharp image and the blur kernel to address this issue. Moreover, we study the face deblurring problem and introduce a novel deep learning network architecture to solve it. We also address the general motion deblurring problem; in particular, we aim at recovering a sequence of seven frames, each depicting some instantaneous motion of the objects in the scene.
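The uniform blur model mentioned above (a blurry image as the convolution of a blur kernel with a latent sharp image) can be sketched directly. The kernel and images here are toy values for illustration; real deblurring works with estimated kernels and photographs.

```python
import numpy as np

def blur(sharp, kernel):
    """'Valid' 2D convolution: every blurry pixel is a kernel-weighted
    average of neighboring sharp pixels, i.e. b = k * x in the text."""
    kh, kw = kernel.shape
    h, w = sharp.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    flipped = kernel[::-1, ::-1]          # convolution flips the kernel
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(sharp[i:i+kh, j:j+kw] * flipped)
    return out

# A horizontal motion-blur kernel averaging three neighboring pixels.
kernel = np.array([[1.0, 1.0, 1.0]]) / 3.0
sharp = np.arange(25, dtype=float).reshape(5, 5)
blurry = blur(sharp, kernel)              # shape (5, 3)
```

Blind deconvolution must recover both `kernel` and `sharp` from `blurry` alone, which is why prior knowledge or learned models are needed.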

Towards a Novel Paradigm in Blind Deconvolution: From Natural to Cartooned Image Statistics

Daniele Perrone · july 2015.

In this thesis we study the blind deconvolution problem. Blind deconvolution consists in the estimation of a sharp image and a blur kernel from an observed blurry image. Because the blur model admits several solutions it is necessary to devise an image prior that favors the true blur kernel and sharp image. Recently it has been shown that a class of blind deconvolution formulations and image priors has the no-blur solution as global minimum. Despite this shortcoming, algorithms based on these formulations and priors can successfully solve blind deconvolution. In this thesis we show that a suitable initialization can exploit the non-convexity of the problem and yield the desired solution. Based on these conclusions, we propose a novel “vanilla” algorithm stripped of any enhancement typically used in the literature. Our algorithm, despite its simplicity, is able to compete with the top performers on several datasets. We have also investigated a remarkable behavior of a 1998 algorithm, whose formulation has the no-blur solution as global minimum: even when initialized at the no-blur solution, it converges to the correct solution. We show that this behavior is caused by an apparently insignificant implementation strategy that makes the algorithm no longer minimize the original cost functional. We also demonstrate that this strategy improves the results of our “vanilla” algorithm. Finally, we present a study of image priors for blind deconvolution. We provide experimental evidence supporting the recent belief that a good image prior is one that leads to a good blur estimate rather than being a good natural image statistical model. By focusing the attention on the blur estimation alone, we show that good blur estimates can be obtained even when using images quite different from the true sharp image. This allows using image priors, such as those leading to “cartooned” images, that avoid the no-blur solution. 
By using an image prior that produces “cartooned” images we achieve state-of-the-art results on different publicly available datasets. We therefore suggest a shift of paradigm in blind deconvolution: from modeling natural image statistics to modeling cartooned image statistics.
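The “cartooned” image priors discussed above are commonly built on total variation. The snippet below is a minimal sketch of the anisotropic total variation of an image; minimizing it favors piecewise-constant, cartoon-like results. The function name and toy images are illustrative, not the thesis' exact formulation.

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation: the sum of absolute horizontal and
    vertical differences between neighboring pixels."""
    dx = np.abs(np.diff(img, axis=1)).sum()
    dy = np.abs(np.diff(img, axis=0)).sum()
    return float(dx + dy)

flat = np.full((8, 8), 0.5)               # piecewise-constant: TV = 0
noisy = flat + 0.1 * np.random.default_rng(0).standard_normal((8, 8))
print(total_variation(flat), "<", total_variation(noisy))
```

A deconvolution energy that penalizes this quantity pushes the estimate away from the blurry (high-TV-per-edge) no-blur solution and towards sharp, flat regions.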

New Perspectives on Uncalibrated Photometric Stereo

Thoma Papadhimitri · june 2014.

This thesis investigates the problem of 3D reconstruction of a scene from 2D images. In particular, we focus on photometric stereo, a technique that computes the 3D geometry from at least three images taken from the same viewpoint under different illumination conditions. When the illumination is unknown (uncalibrated photometric stereo), the problem is ambiguous: different combinations of geometry and illumination can generate the same images. First, we resolve the ambiguity by exploiting the Lambertian reflectance maxima. These are points on curved surfaces where the normals are parallel to the light direction. We then propose a solution that can be computed in closed form and thus very efficiently. Our algorithm is also very robust and always yields the same estimate regardless of the initial ambiguity. We validate our method on real-world experiments and achieve state-of-the-art results. In this thesis we also solve, for the first time, the uncalibrated photometric stereo problem under the perspective projection model. We show that, unlike in the orthographic case, one can uniquely reconstruct the normals of the object and the lights given only the input images and the camera calibration (focal length and image center). We also propose a very efficient algorithm, which we validate on synthetic and real-world experiments, and show that the proposed technique is a generalization of the orthographic case. Finally, we investigate the uncalibrated photometric stereo problem in the case where the lights are distributed near the scene. In this case we propose an alternating minimization technique that converges quickly and overcomes the limitations of prior work assuming distant illumination. We show experimentally that adopting a near-light model for real world scenes yields very accurate reconstructions.
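For context, the calibrated Lambertian model underlying photometric stereo can be sketched as follows: pixel intensity is the albedo times the clamped dot product of the surface normal with the light direction, and with three known lights the scaled normal follows from a 3x3 linear system. The uncalibrated and perspective settings studied in the thesis remove exactly this knowledge of the lights; the numbers below are illustrative.

```python
import numpy as np

def lambertian_intensity(normal, albedo, light):
    """Lambertian shading: intensity = albedo * max(0, <normal, light>)."""
    return albedo * max(0.0, float(normal @ light))

# Calibrated case for a single pixel with three known light directions.
lights = np.array([[0.0, 0.0, 1.0],
                   [0.7, 0.0, 0.7],
                   [0.0, 0.7, 0.7]])
true_normal, true_albedo = np.array([0.0, 0.0, 1.0]), 0.8
intensities = np.array([lambertian_intensity(true_normal, true_albedo, l)
                        for l in lights])
b = np.linalg.solve(lights, intensities)   # b = albedo * normal
albedo, normal = np.linalg.norm(b), b / np.linalg.norm(b)
```

When the `lights` matrix is unknown, any invertible transformation of it can be compensated by transforming `b`, which is the ambiguity the thesis resolves.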


Computer Graphics Laboratory ETH Zurich


Student Projects

Semester, bachelor and master theses.

We are working on a large variety of topics in the field of computer graphics / machine learning and related areas: physics-based animation, rendering, geometric modeling, computational materials, computer-aided learning, medical simulations, display technology, as well as image- and video-based techniques. If you are looking for a semester, bachelor, or master thesis in our lab, take a look at our current offers listed below (Master Thesis, Semester/Bachelor Thesis). Some theses are done in collaboration with Disney Research, ETH Media Technology Center, ETH Game Technology Center and Arbrea Labs; check out their pages for more projects. Although we try to keep this page up-to-date, other projects might be available. Therefore, if you are generally interested in computer graphics / machine learning and are searching for a project, talk to us. If you are interested in a specific project, feel free to contact the supervisor via the [mailto] links, or if you have ideas, drop by; we are always looking for interested students. To schedule a first meeting where we can discuss multiple relevant projects, please contact the thesis coordinator ([email protected]) with your transcript and CV. General guidelines for theses at CGL are available here.

Access to the PDF documents is only granted from within ETH network addresses (129.132.*, 195.176.*, 10.5.*, 10.6.*).

Master Thesis

Semester/Bachelor Thesis

  • Guidelines for theses at CGL : Guidelines (PDF)
  • Thesis Document Template: LaTeX (zip)
  • For help with LaTeX, please see this list of guides: External Link
  • Thesis Presentation - use this form to request a time slot: Form (Excel)

Grades still have to be formally approved in the "Notenkonferenz" (grade conference) of the department. This implies that master students have to start before a specific date if they want to receive the master's degree directly after completing the thesis. If they miss one of these dates, they have to wait six months for the degree after completion of the thesis. However, if they start later, the department issues a confirmation of successful completion of the master's degree (which is needed for job applications).



How to write a thesis

  • Advisor's Agreement
  • Information on the Analysis of a Research Topic

Table of Contents

  • Section 1: Basics for Success
  • Subsection 2.1: Parts of a Student Thesis
  • Subsection 2.2: Structure of the Thesis
  • Subsection 3.1: Use of Fonts
  • Subsection 3.2: Page Layout
  • Subsection 3.3: Recurring Elements
  • Subsection 4.1: Citation Style (with Square Brackets)
  • Subsection 4.2: Bibliography
  • Subsection 4.3: Citation Techniques
  • Subsection 5.1: Search Engines (Selection)
  • Subsection 5.2: Techniques
  • Subsection 5.3: Reading a Source I (Papers)
  • Subsection 5.4: Reading a Source II (Longer Works)
  • Subsection 5.5: How to Rate a Source
  • Section 6: Writing Style
  • Section 7: Best Practices
  • Section 8: Time Management
  • Section 9: Hints on the practical part/Implementation
  • Section 10: Procrastination Techniques
  • Subsection 11.1: The two types
  • Subsection 11.2: Question and Answer Session
  • Subsection 11.3: Miscellaneous
  • Section 12: Thesis Evaluation Criteria
  • Section 13: Recommended Literature (German only)

1  Basics for Success

The following factors make for good scientific work:

  • clear problem/objective
  • logical, structured layout
  • accurate handling of terms
  • plausible, comprehensible reasoning (through clean structuring, argumentation, references, objectivity, etc.)
  • content and formal accuracy
  • systematic approach and critical questioning of results
  • interesting presentation of facts (also through good illustrations, etc.)

back to table of contents

2  Formalities

2.1  Parts of a Student Thesis

It is best to use the templates (Word and LaTeX) provided by the department, especially to automatically generate the title page, declaration of independence, and all directories. The thesis must be written according to the following structure:

  • Title page with the following contents: name of the university/faculty/institute/department, name of the university professor, type of work, topic, your own name, date and place of birth, name of the supervisor, date of submission
  • Task description: Reproduce the text of the task description unchanged and in full
  • Declaration of independence
  • Summary/Abstract (half a page each): It must stand alone and provide an overview of the entire work (including results). Do not use or introduce abbreviations! No references to parts of the work!
  • Table of contents: Listing no deeper than level 3
  • Symbol/formula/abbreviation directory (optional): Used symbols and terms can be collected here in one place
  • The actual text of the work: The first chapter (Introduction) begins here, as does the page numbering. The chapters are numbered decimally (use no more than 4 levels of structure, preferably 3)
  • Bibliography
  • List of Figures (optional)
  • List of Tables (optional)
  • Glossary (optional)
  • Appendix (optional, alphabetical numbering of chapters)

2.2  Structure of the Thesis

The IMRAD (Introduction, Methods, Results, and Discussion) schema is a common standard. Proposed structuring:

Chapter 1: Introduction  Motivate the problem in the application context, restate the task in your own words, and give an overview of the work.

Chapter 2: Related Work  Show who has already dealt with the topic or related topics, what solutions have been described, and what the connection is to your own work.

Chapter 3: Fundamentals  Introduction of mathematical, technical, algorithmic, or other basic knowledge necessary to understand the work.

Chapter 4ff.: Methodology and Implementation  Main part of the work - first describe the concept, then the realization.

Chapter 4ff.+1: Results  Objectively present the results and describe how exactly they were obtained. Draw attention to peculiarities.

Chapter 4ff.+2: Discussion  Discuss the implemented solution based on the results. Understand and explain peculiarities. Based on this, work out the pros and cons (of the developed method). Under certain circumstances, the chapters "Results" and "Discussion" can be summarized in a single chapter.

Chapter 4ff.+3: Summary  Brief summary and evaluation, results/solutions are condensed into a conclusion.

Chapter 4ff.+4: Outlook  The outlook shows meaningful possibilities for further processing the material. Chapters "Summary" and "Outlook" can be summarized in a single chapter under certain circumstances.

In Chapter 7 (Best Practices), you will learn more about the contents of each chapter.

3  Formatting

It is best to use one of the chair templates. You can find these at:  tu-dresden.de/ing/informatik/smt/cgv/studium/materialien

3.1  Use of Fonts

Body text in serif font creates good readability. Headings can look nice in sans-serif bold font. Recommended font sizes are:

  • Body text: 11pt
  • Headings (h1-h2-h3): 18pt — 14pt — 12pt

Font Styles

Emphasis can be achieved through italics, bold, ALL IN CAPITAL LETTERS, small caps, and font family. However, do not use several AT ONCE! Underlining is prohibited; bold and capital letters should be used sparingly! For source code, a monospace font is recommended (e.g., Courier New). Free variables and free function names should be italicized, whereas characters with fixed meanings should NOT be italicized - these are well-known functions (e.g., sin/cos, lim), constants (e.g., Euler's number e, the constant π, or user-selected constant symbols), and unit symbols (e.g., m/s, kHz). Mathematical variable names never consist of more than one letter! If more characters are needed for precision, they may appear in a subscript (also not italicized). Examples of font formatting in formulas:

  • Incorrect: func(x) = πxmax*sin(2x)
  • Correct: f(x) = πx_max⋅sin(2x) (one-letter function name, “max” as an upright subscript, multiplication dot instead of the asterisk)
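In LaTeX, the rules above (a one-letter italic function name, an upright subscript with fixed meaning, an upright well-known function, and a multiplication dot rather than an asterisk) might be typeset as follows; this is an illustrative sketch, not part of the official template:

```latex
% italic variable x, upright subscript "max", upright \sin, \cdot for multiplication
\begin{equation}
  f(x) = \pi \, x_{\mathrm{max}} \cdot \sin(2x)
\end{equation}
```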

Italicized text can also be used to indicate (self-introduced) technical terms and foreign words and bold can be used to indicate keywords. Furthermore, italics are used when referring to titles of independent work (monographs, books).

Quotation Marks

Use quotation marks “” not to emphasize words, but exclusively to quote text passages or when referring to non-independent literature (articles from conference proceedings or journals, essays, book sections).

3.2  Page Layout

The work should be printed double-sided . Leave margins for notes. A good layout has an outer and inner margin of 2.5cm each, as well as a top margin of 3cm and a bottom margin of 2cm. The top margin leaves room for a 1cm high header, which bears the title of the current chapter and the current page number on the outside. The page numbers begin on the right (odd) page with the introduction. Chapter beginnings are always on a right (odd) page. If necessary, the preceding (left) page remains blank.

3.3  Recurring Elements

Figures

are to be labeled according to the scheme Fig.␣<Chapter number>.<Sequential number>:␣<Title>. This refers to captions that are placed below the figure. Find a meaningful title. The figure must be self-explanatory together with its title. Pay attention to high quality, and prefer vector graphics. Also, ensure that each part of the image and its labels are large enough for good readability.

Tables

are to be labeled according to the scheme Table␣<Chapter number>.<Sequential number>:␣<Title>. This refers to headings that are placed above the table. As with figures, a meaningful title is important.

Source codes

are to be labeled according to the scheme Listing␣<Chapter number>.<Sequential number>:␣<Title>,␣<Filename>. For short code passages, captions can be used; otherwise, use headings.
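
With the listings package, for example, such a labeled code block could look like this (language, caption text, and file name are placeholders):

```latex
\usepackage{listings}
% ...
\begin{lstlisting}[language=C++, float,
    caption={Computing the dot product, \texttt{vec3.cpp}},
    label=lst:dot]
float dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
\end{lstlisting}
```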

Footnotes

are used for annotations or translations. If the footnote refers to a word, the footnote mark immediately follows it. If it refers to a sentence, it is placed immediately after the period. Use footnotes sparingly, and consider whether the content can instead be incorporated into the text in a meaningful way.

4  Citations

All sources, including texts, images, surveys, links, etc., must be cited. The author and the source of the content (books, papers, slides, web pages, etc.) should be identified in the bibliography. The reader must have a complete overview of the sources used and their origin, especially for non-printed media. Permission from the author is not required.

4.1  Citation Style (with Square Brackets)

A citation is marked in the text to refer to the respective source. Its form varies by field of study and type of work: numerical references (IEEE style) or alphanumeric references (AMS style, authorship trigraph). The following rules should be used in the final thesis (or simply use the CGV template):

  • 1 author: the first three letters of the surname + year of publication... e.g. [Mei05]
  • 2 - 4 authors: the initial letters of the surnames (in the order they appear in the paper) + year of publication... e.g. [AB10], [XYZ15], [STUV12]
  • > 4 authors: the initials of the first 3 surnames, then a "+" sign, then the year of publication... e.g. [XYZ+04]
  • If there are multiple works by an author in the same year, lowercase letters are appended to the year... e.g. [Mei05a], [Mei05b]
  • If there are multiple sources for a text passage, they are separated by commas within a square bracket... e.g. [Mei05, XYZ+04]
  • If referring to a specific part of the source, this can be indicated in the reference list or citation bracket by specifying the page number... e.g. [Mei05, p.99].
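
With biblatex, the alphanumeric scheme above roughly corresponds to the alphabetic style (a sketch; the CGV template may ship its own style, and the entry key Meier2005 is a placeholder):

```latex
\usepackage[style=alphabetic]{biblatex}
\addbibresource{thesis.bib}
% in the text:
% \cite{Meier2005} produces a label like [Mei05]
% \cite[p.~99]{Meier2005} adds the page number to the bracket
% at the end of the document:
\printbibliography
```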

4.2  Bibliography

In the numerical variant, sources in the bibliography are sorted according to their first appearance in the text. In the alphanumeric variant, they are sorted according to the contents of the bracket. The structure of an entry in the bibliography differs slightly depending on whether it is a conference paper or a book (chapter) (due to the different information to be provided). For example, a conference paper is structured as follows (according to the CGV scheme):

[XYZ99] LastnameInCapitalLetters1, ␣ Firstname1 ␣ ; ␣ Lastname2, ␣ Firstname2 ␣ ; ␣ Lastname3, ␣ Firstname3: Title of the Paper. ␣ In: ␣ Proceedings of the italicized conference on something Vol. X(Y), ␣ Location, ␣ Year, ␣ pp. <PageX-PageY>

When citing sources from the web, always include the URL/link and the date of retrieval. If possible, archive a copy of the internet source. Surveys/interviews are also sources. In this case the following information should be recorded:

[Mei15] ␣ Lastname1, ␣ Firstname1 (Interviewee) ␣ ; ␣ Lastname2, ␣ Firstname2 ␣ (Interviewer): ␣ Title of the Interview. ␣ Telephone/Personal/Written Interview/Conversation/Survey. ␣ Location, ␣ Date, ␣ Time
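
In a BibTeX database, the conference paper and a web source might be captured as follows (all names, keys, and values are hypothetical placeholders):

```latex
% thesis.bib
@inproceedings{XYZ99,
  author    = {Xu, Ada and Young, Ben and Ziegler, Carl},
  title     = {Title of the Paper},
  booktitle = {Proceedings of the Conference on Something},
  volume    = {42},
  address   = {Location},
  year      = {1999},
  pages     = {11--22}
}

@misc{Web17,
  author       = {Doe, Jane},
  title        = {Title of the Web Page},
  howpublished = {\url{https://example.org/page}},
  note         = {Retrieved 2017-03-01; archived copy kept locally}
}
```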

4.3  Citation Techniques

Exact (direct) quotation  Useful for definitions and statements that could not be described more accurately. Placed in quotation marks if it is not longer than 4 lines. Otherwise, the entire quote block is indented (without quotation marks). The source is placed immediately after the quote in the text (Harvard method). Exact quotes must be honestly and accurately reproduced, without any rewording or distortions of the meaning. Text highlights or errors in the original text must also be reproduced (these can be marked with [sic] - Latin for "thus" or "really so"). Double quotation marks in the quote are replaced with single ones. Omissions are indicated by [...] (make sure that this does not distort the meaning of the original). Adaptations to the original, e.g. grammatical phrasing, should be written directly in square brackets at the relevant point - and also if words are added or highlights are made (write a clear comment in the square brackets, e.g. [emphasis added by the author]). Exact quotes should be used sparingly!

Paraphrased (indirect) quotation  Here, the content of sentences or paragraphs from the original literature is reproduced with the same meaning as in the original text. The strict rules of exact quotation do not apply, but thoughts may not be altered, omitted, or added. The sentence or paragraph that reproduces the content of the external source must contain an "according to," "as per," or similar. The citation bracket is placed before the period if the paraphrased quote spans only one sentence. If it spans several sentences, the citation bracket is placed after the period of the last sentence of the quote.

Figure citation  Figures from external sources must be reproduced unchanged or the changes must be clearly indicated. The source reference belongs at the end of the figure caption (in square brackets). If a foreign illustration was used as a template for your own illustration, it also requires an indication, e.g. "according to [XY01, Fig. X.Y2]."

"Second-hand" quotes  are those in which the cited source itself quotes the actual original work. Such quotes should be avoided! An exception is the rare case that the original source is unavailable. A "second-hand" quote must be marked with the note "cited in", e.g. "[MXY+01] cited in [XY01]" (here, [MXY+01] is the original source and [XY01] the cited source). Both works must be listed in the bibliography.

Good style  is to mention the authors of an external source by name (if there are more than two authors, use the form "SurnameOfFirstAuthor et al."); however, it is also possible to use the citation bracket directly for this purpose. Examples:

Direct quote:

okay : In the study by [MXY+01], it is described that these are " [...] crucial factors."

better : Meier et al. describe in their study that these are " [...] crucial factors." [MXY+01]

Indirect quote:

okay : According to [MS01], there are various crucial factors.

better : According to Meier and Schmidt, there are various crucial factors [MS01].

not so good : There are various crucial factors for this (see [MS01]).

5  Literature Research

Valuable sources should be used, such as papers from well-known conferences with review systems or those that have been frequently cited. Printed sources are generally more credible and should be preferred over web sources such as forums, tutorials, or Wikipedia. Wikipedia can be a first point of reference, but it is scientifically controversial - it's better to search for "proper" literature from there. General problems include:

  • usually only one paper is given by the advisor
  • lack of overview over the field
  • there is a lot of literature and much of it is poor
  • there is not enough time for exhaustive research.

The strategy is to:

  • read the given paper completely (sometimes multiple times) and write down important technical terms (buzzwords)
  • re-read its related-work section to develop a sense of the field
  • examine related works (often it's enough to read the abstract and results/discussion to get an impression)
  • divide literature into relevant current works, overview articles (STARs) as a source pool, and older works (good candidates for backward search).

These resources (STARs, old works, buzzwords, names of major conferences) are the basis for further research. Where should one look?

5.1  Search engines (selection)

  • ACM Digital Library http://dl.acm.org
  • IEEE Xplore Digital Library http://ieeexplore.ieee.org
  • Google Scholar http://scholar.google.com
  • CiteSeer http://citeseerx.ist.psu.edu/index
  • Microsoft Academic Search http://academic.research.microsoft.com/

The SLUB has subscriptions to many online portals, so you can download listed papers or books for free. However, this only works from the TUD IP address range (Uninetz). If you want access from home, you can use OpenVPN.

5.2  Techniques

Finding new works through older ones  search for the older publication, then display the works that cite it (this option is usually called "Referenced by" or "Cited by"), and research the displayed (newer) works.

Finding new works through buzzwords  enter buzzwords in the search engine. Sort by publication date and number of citations. Scan the first hits (possibly new buzzwords will emerge). Possibly search for research groups that are known in the specific field and search their publication directories.

Finding new works through conferences  after finding relevant research areas and keywords, you can search for major conferences in these fields. Look at the lists of publications and then research them in more detail.

Finding new works through well-known authors  search for the publication lists of authors who are frequently mentioned in the relevant field or whose names frequently appear in bibliographies. Note the order of authorship in the header of a paper: the first-named author wrote (the largest part of) the work, while the last-named author is typically the head of the department/institute/chair who supervised the work.

5.3  Reading a Source I (Papers)

Begin with the Abstract and Results/Discussion section to quickly access the essential information. Ask the following questions about the source being examined:

  • What are the core contributions? (They are usually at the end of the Introduction)
  • What relevance do the core contributions have for your own work?
  • What results from cited publications are relevant to serve as justification for core contributions in other papers? Compile a list of these cited works and follow up on them.
  • What terminologies were introduced for the relevant core contributions? Are these terms interesting for your task? Can you expand your search queries using the new terms?

Don't panic if you don't understand everything immediately: Scientific papers usually contain highly condensed information. Typically, they need to be read multiple times to be fully understood.

5.4  Reading a Source II (Longer Works)

SQ3R method: survey, question, read, recite, review

  • Survey : Get an overview and study the table of contents: What is covered? How is the text structured? What foundations does the author rely on? What is important for you?
  • Question : Formulate questions about the text: What information do you expect from this text for your own work?
  • Read : Read the chapters relevant to your questions.
  • Recite : It's okay if you don't fully understand everything on your first read of a chapter. Simply repeat what you understood (preferably aloud). If you get stuck: reread and repeat what you understood (preferably aloud). If you get stuck again: reread and...
  • Review : Summarize the content briefly in your own words. Were your questions about the text answered? Are there new questions?

5.5  How to rate a source

How do you know if the found source is useful? Even without specialized knowledge, you should pay attention to the following characteristics:

  • the work is current
  • the "related work" section is extensive
  • the contribution of the work is clearly highlighted in the introduction
  • the work has been cited frequently
  • the work was published in a major conference (if "ACM" or "IEEE" appears in the conference title, this is a good starting point)

6  Writing Style

Always use scientific language - never use everyday language!

Technical terms

Use technical terms, but not to obscure content. The reader must be able to understand them. If unsure, create a glossary. If there is a technical term for something, use it instead of a synonym.

Fill-in phrases

Avoid standard phrases such as "as can be easily seen...". Don't use relativizations ("many", "often", "mostly"), exaggerations ("enormous", "incredible"), filler words ("indeed", "well"), reassurance words ("somewhat", "somehow", "probably"), argument-replacement words ("of course", "naturally"), or personal opinions. Your own statements are not prohibited, but they must be critically reflected upon and justified. Stay humble in your explanations and avoid arrogant formulations (bad example: "The foundation is trivially provided by the well-known theories of tensor arithmetic"). Avoid formulations with "one" or "I".

Comprehensibility

Don't write artificially complicated, but as if you were orally explaining a scientific fact to a professor. Write concise, clear sentences that exclude ambiguities and are content-wise informative. Terms must be defined clearly and used in a consistent manner. Also strive for a consistent level of language. Stay logical and never lose the thread. Stay focused on the problem. Write for the reader! Guidelines for comprehensibility:

  • Each sentence contains a statement
  • Each paragraph contains a thought
  • Each section contains a group of thoughts

Bad : It is a well-known problem in computer graphics that this interface limits the possibilities, which is why some data, such as textures, are not held in conventional main memory but are transferred to the graphics card and stored there in graphics memory.

Better : It is a well-known problem in computer graphics that this interface limits the possibilities. Therefore, it is common to store data such as textures in graphics memory instead of conventional main memory. This way, they only have to be transferred to the graphics card once.

Sentence Structure

Use subordinate clauses sparingly and avoid nested or run-on sentences! Pay attention to a clear division of roles: main information in the main clause, subordinate information in the subordinate clause. Eliminate subordinate clauses without (relevant) content. Avoid chains of genitives. Use verbs instead of nouns or auxiliary-verb constructions (use "depends on" instead of "there is a dependency" or "is dependent on"). Whenever there is a choice between a verb and something else, choose the verb! Do not use too many prepositional phrases, i.e., not more than one preposition ("in, under, over, between, in front of, after, against...") per sentence. Write in a positive sense instead of a negative one: do not use double negations, and write what is, not what is not. Also avoid too many passive formulations; write in an active style.

Abbreviations

Use abbreviations sparingly and always unambiguously. Explain them at their first occurrence and create a list of abbreviations. Commonly known abbreviations (according to the Oxford English Dictionary, Chicago Manual of Style, etc.) do not need to be included in the list. Do not rely on the reader to remember all abbreviations immediately: if you use formulas, constants, or abbreviations again many pages after their first introduction, explain them again with a brief repetition. For example, write "Here, the value α, the rotation angle, is used again to..." even if you introduced the variable α three chapters ago.
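
In LaTeX, the acronym package (one option among several, e.g. glossaries) can manage the first-use expansion and the list of abbreviations. A sketch with placeholder entries:

```latex
\usepackage{acronym}
% list of abbreviations, e.g. in the front matter:
\begin{acronym}
  \acro{GPU}{Graphics Processing Unit}
  \acro{BRDF}{Bidirectional Reflectance Distribution Function}
\end{acronym}
% in the text: the first \ac use prints
% "Graphics Processing Unit (GPU)", later uses print just "GPU"
The shading is evaluated on the \ac{GPU}.
```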

Avoid bullet point lists and write continuous text instead. Avoid frequent use of forward or backward references (e.g., "As will be seen in chapter X..." or "As shown in chapter Y...").

Use figures, tables, and diagrams to make complex textual statements more understandable, but avoid figures of trivial things. All figures or tables must be self-explanatory (axis labels, legends, color meanings, units, etc.). The use of a figure in no way makes a textual description obsolete: a) the body text must also be understandable without the figure and b) the figure must not replace the running text. Do not place essential new information solely in the figure (Negative example: The text describes that the effects shown in figure XY occur for these or those reasons. However, the effects themselves are only mentioned in the caption). All figures, tables, and diagrams must be referenced in the text.

Small numbers ("zero" to "twelve") are normally spelled out in text unless there is particular emphasis on the numerical size. Units are generally abbreviated without a period (kg, km, h, min, ...). A narrow non-breaking space is placed between the value and the unit symbol (do not break the line there). Avoid writing unrelated numbers next to each other (negative example: "256 64-bit registers...").
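
In LaTeX, the siunitx package inserts the narrow non-breaking space and sets the unit symbols upright automatically. A sketch (the sentence content is illustrative):

```latex
\usepackage{siunitx}
% ...
The signal is sampled at \SI{44.1}{\kilo\hertz},
and the sensor weighs \SI{1.2}{\kilo\gram}.
% \SI{<value>}{<unit>} typesets the value and unit with a thin,
% non-breaking space that never breaks across lines
```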

A consistent style should be used, and a uniform naming convention for variables, function names, etc. should be established. Do not use overloaded symbolism, but still strive for accuracy. Avoid using formulas directly in running text as much as possible, as they disrupt the reading flow and can sometimes sabotage the text layout; instead, use displayed formula environments. For example:

The Pythagorean theorem is a fundamental theorem of (Euclidean) geometry and states that:

a² + b² = c²

The equation holds for any right triangle, where a and b are its catheti, and c is its hypotenuse.
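
In LaTeX, such a displayed formula is set with an equation environment rather than inline math:

```latex
The Pythagorean theorem is a fundamental theorem of (Euclidean)
geometry and states that
\begin{equation}
  a^2 + b^2 = c^2 .
  \label{eq:pythagoras}
\end{equation}
The equation holds for any right triangle, where $a$ and $b$ are
its catheti, and $c$ is its hypotenuse.
```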

Do not unnecessarily include complicated and lengthy derivations in the main text. Reduce them to the essential points (and provide additional details in the appendix if needed).

Source code

Should never be included in its entirety in the main text! If necessary, include only selected portions that special circumstances call for. Instead, explain the developed procedures using structure and flow diagrams or with pseudocode.
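
For such pseudocode, the algorithm2e package is one common choice. A sketch with a deliberately generic procedure (the algorithm itself is a placeholder):

```latex
\usepackage[ruled,vlined]{algorithm2e}
% ...
\begin{algorithm}
  \caption{Sketch of an iterative refinement loop (placeholder)}
  \KwIn{initial estimate $x_0$, tolerance $\varepsilon$}
  \KwOut{refined estimate $x$}
  $x \leftarrow x_0$\;
  \While{$\mathrm{error}(x) > \varepsilon$}{
    $x \leftarrow \mathrm{refine}(x)$\;
  }
  \KwRet{$x$}\;
\end{algorithm}
```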

7  Best Practices

In the following, you will find tips for the chapters, presented according to the IMRAD (Introduction, Methods, Results, and Discussion) scheme.

Introduction

At the very beginning of the written work, the introduction should provide a concise motivation for the task and elegantly introduce the topic. Here, the problem is presented in the context of an application, and the content of the work is briefly previewed. What problem was solved, why is it relevant? What is the approach? What was thematically limited or excluded? It is important to highlight your own contribution in a few concise sentences. (The conclusion should refer back to these contributions at the end to give the work a narrative bracket.) The introduction ends with an overview of the work — here, the contents of the individual chapters are briefly described (avoid trivial statements such as "In the results chapter 7, the results will be presented"). Hint : Avoid standard intros like "XY is an important field of application in computer graphics" or "XY is indispensable in computer graphics."

Related work

It should be shown who has already dealt with the topic or similar related topics, which solution approaches have been described, and how each work relates to your own. Keep the "4 questions" in mind as a mnemonic: What problem was tackled? How was the problem solved? What did it achieve? How does it relate to your own work? In this chapter, it is particularly difficult to maintain a common thread and prevent the text from becoming a mere list of papers. Two strategies, which can be combined, are available: chronological and aspect-oriented. In a chronological listing, related works are described in chronological order, giving a historical overview of the solution approaches to the problem. Typically, the earliest source is described in more detail, as are the more recent sources; finally, the current state of the art should be examined in more detail. The second strategy, aspect-oriented citation, divides the papers according to aspects of your own problem. For example, if the topic is volume rendering with global illumination, papers on volume rendering in general should be presented first, then sources on advanced methods, and finally papers on the integration of global illumination, possibly even in separate sections.

Fundamentals

Mathematical, technical, algorithmic, and other knowledge should be explained here, but only as much as is needed to understand the work. The author's own level of knowledge before starting the work can be taken as the assumed prior knowledge. More advanced background knowledge or detailed mathematical derivations can be moved to the appendix; the thesis is not a textbook. Explanations of the technical terms used may be given here (or at the beginning of the methodology chapter).

Methodology and Implementation

Here, the author's own work is described conceptually, including problem analysis and solution search/finding. At this point, a theoretical examination of the material should take place — do not explain using concrete APIs or source code (pseudocode, however, is allowed). Only after that comes the description of the implementation — in most cases, the implementation as software. Pay attention to a clear and problem-oriented selection of code examples or (partial) class diagrams. Detailed and comprehensive presentations should be moved to the appendix if necessary.

Results

Here, the results are presented objectively. A division into quantitative evaluation (generation of measurement data) and qualitative evaluation (surveys, expert feedback, description of peculiarities) is useful. For each evaluated issue, the result (e.g., as a figure or table) should be shown first, then described, and only then interpreted. Assessment should not yet take place here.

Discussion

Only in the discussion section are the results critically questioned and assessed. Based on this, the pros and cons of the developed method are worked out.

Conclusion

The work is briefly summarized and evaluated, and the results/solutions are condensed into a conclusion (making the connection back to the problem statements raised in the introduction). An assessment of the general usefulness of the developed methods can be given. Hint : End on a positive note! In the conclusion, show the "good" things first, then the "bad" ones. For the latter, note that they are solvable and that the developed methods are nevertheless promising.

Outlook

The outlook presents sensible extensions of the developed methods or possibilities for further research. Here, current weaknesses should be framed as opportunities for new concepts.

At the beginning of a main chapter, an overview of the following subchapters can be given. At the end of a main chapter, its content can be summarized and linked to the next main chapter. Anything that could hinder the reading flow (such as extensive tables, figures, mathematical derivations, or source code) should be moved to the appendix. Only really important snippets of source code belong in the main part; it is better to avoid them and explain the underlying concepts/algorithms instead.

8  Time Management

Figure 1: Schedule for a Bachelor's Thesis.

Create a schedule and stick to it (for example, as shown in Figure 1). Plan buffer time for unforeseeable events, and don't let holidays such as Christmas and New Year's, or exam periods, catch you off guard. The work should be evenly distributed throughout the week; a day off is important! Make daily to-do lists and plan enough breaks (more than 5 hours of concentrated work per day is hardly possible). Plan the writing in particular in small, manageable steps, and in the evening try to already complete one item from the next day's list.

Start work at a set time every day, whether you feel like it or not! Don't forget the breaks (recommended are 15 minutes of break after 45 minutes of work).

Consider your biorhythm and don't schedule important tasks during times when you are in a "tired" phase (instead, do simple tasks: organize literature, take care of the household or relax...). Remember to engage in daily physical activity — it also promotes mental performance.

9  Hints on the practical part/implementation

Especially in Bachelor's theses, you have a very limited amount of time for implementation. Therefore, focus on the essential core of your implementation. You don't need to reinvent the wheel, instead use existing software components and frameworks (e.g. for event handling, I/O operations, etc.). Your advisor can surely give you good hints, so that you don't have to develop your software from scratch. Discuss explicitly with your advisor the scope of functionality that your developed components should provide, and tackle these tasks first. If you realize during the course of your work that you still have time for further functionality, this will be welcome and credited to you as a bonus.

When implementing, always keep the KISS principle ("Keep it simple and stupid" or "Keep it small and simple") in mind. Try to find the simplest implementation for the given problem. In general, a mature software product is not expected; a functional prototype is completely sufficient. However, don't skimp on comments in the source code and adequate documentation. Good program structure and software design are also essential. This helps everyone who wants to reuse your software later, for example for follow-up theses or in teaching. Perhaps you yourself will want to reuse your own software later and will be pleased to find all functionality well structured and explained.

10  Procrastination Techniques

Many students invest too much time in the implementation. Reasons for this are sometimes "perfectionist thoughts" or the desire to include even more program features in the software. Unfortunately, there is also the problem that some prefer to engage in intensive implementation work to avoid the written work — possibly because they are not sure how to "get started". Most likely, they simply lack experience in planning and implementing such a large project.

The phenomenon of procrastination is well known and is sometimes referred to as "student syndrome". If you are plagued by such thoughts, or feel unable to plan your activities (or manage to do so only listlessly), contact your supervisor in good time and develop a concept for dividing up the work together. Be aware: you are certainly not the first person to experience writer's block, and help is available!

11  Presentation Guidelines

11.1  The Two Types

Intermediate presentation  In this presentation, students should particularly show their current progress and receive feedback from the audience. Suggestions may arise that can improve the implementation of the task. The presentation should include an introduction to the topic and the presentation of related work to give the audience an overview of the topic. Then, the chosen approach should be justified, implementations shown, and current results presented. A slide showing a list of completed and uncompleted parts of the task should not be missing. For the unfinished tasks, a realistic estimation should be made with the help of a timetable of whether and how they can be completed on time.

Defense  It is not just a summary of the produced content. Due to the limited speaking time, students should demonstrate that they are able to select important content and leave out unimportant content (or mention it only in passing). Above all, your own contributions should be emphasized. Try to present your material as understandably as possible: unlike you, the listeners do not know every detail of the work and do not want to hear about every little problem that occurred during the processing time.

Start with a good motivation and introduce the task. Be creative and catch the interest of your listeners! This also includes explicitly listing the challenges (i.e., why your problem is a problem), for instance on a single slide entitled "Challenges." This is followed by a brief outline that provides a roadmap for the presentation. Then present related work and technical foundations, kept as brief as possible. A defense is not a lecture, and many things have already been discussed in the intermediate presentation.

The core of the work, your own achievements, should occupy most of the presentation time. For particularly extensive work, it may be necessary to discuss only one part in detail and present the remaining parts as an overview. Always keep the thread, and consider carefully what knowledge the listener needs at which point to follow the presentation, thus avoiding duplicate explanations. Depending on the nature of the work, it can be helpful to start with an overall view (big picture) before explaining finer structures; this gives the listener a guide. Usually, results from measurements or surveys are presented and discussed at the end. The presentation ends with a brief conclusion in which you can emphasize your own achievement again and refer back to the task statement/introduction. Here, you can tie things together and show the "Challenges" slide once more, this time briefly summarizing, next to each sub-problem, your presented solution(s). This is followed by a demonstration of the developed application and the obligatory question-and-answer session.

11.2  Question and Answer session

Imagine the question and answer session as an opportunity to present more in-depth knowledge. Some questions may be directed towards you as an "expert" and may be quite detailed, while others may be simpler in nature and serve as an indication that parts of your presentation were not understood. React calmly and professionally in the latter case. Under no circumstances should you respond rudely or arrogantly — difficulties in understanding often come from the structure of the presentation (especially if many details are discussed but an overview is lacking). Do not view expert questions as a personal attack and try to answer as objectively as possible. This is not always easy, as you have spent a long time working on the task and may have become very attached to it. Try to approach the situation from a distance and do not justify your actions, especially if your approach is being questioned. Instead, allow for alternative solutions and provide a comparative assessment of your own approach.

11.3  Miscellaneous

Time Limit  The given time limit must be strictly adhered to. During your presentation, the last 5 minutes of speaking time will be discreetly indicated to you. If you are about to exceed the limit, you will be told verbally; at that point, you should definitely wrap up the presentation, otherwise deductions in the evaluation may occur.

Clothes  Dress appropriately for the presentations. However, you don't need to wear a suit. For defending a thesis or dissertation, a dark pair of long pants and a shirt or a plain sweater is sufficient. T-shirts with prints or casual pants can make you appear less credible as a presenter.

Performance  Be as relaxed and confident as possible. Try not to appear too stiff (but also not too hyper). Use speech pace and accentuation to guide the presentation and direct the listener's attention.

Practice  Practice your presentation beforehand — for example, in front of a mirror or with good friends. This way, you can already get some initial feedback on its comprehensibility. You will also learn to assess whether the presentation fits within the time limit. Practicing the presentation is also a great way to train a relaxed attitude and speaking style.

Slides  Don't write everything you say on the slides. There is a great risk that it will appear as if you are simply reading off the slides. Instead, create good illustrations and diagrams. Slides support the presentation in this way much more effectively, as concepts are often more understandable and tangible when presented graphically. Avoid unnecessary text in your presentation.

Backup Slides  If you had to remove detailed information from the presentation slides due to time constraints, don't hesitate to collect them as an appendix. In the Q&A session, the extra slides can help with your explanations and also show that you have a deep understanding of the subject matter.

References in Slides  Illustrations and descriptions adopted from related work must be clearly marked as citations, e.g. with square brackets. Use the same abbreviations as in your written work. Create a slide with the references used in the presentation for the appendix, but do not show this slide during the presentation.

Intermediate Questions  Generally, be open to spontaneous questions, but don't waste valuable speaking time. Try to return to the presentation as quickly as possible and refer to the Q&A session for questions that would take too long to answer.

12  Thesis Evaluation Criteria

The Chair of Computer Graphics and Visualization pays attention to the following points when assessing final theses:

  • How extensive, how difficult, and how innovative is the work?
  • How much prior knowledge has the student brought in through lectures, exercises, or seminars?
  • Were the objectives of the task achieved, changed, or expanded?
  • What is the student's working method like with regard to goal orientation, prudence, systematic approach, independence, and ability to discuss?
  • What is the quality of the written work with regard to systematic structure, literature review, positioning of the student's own work, comprehensibility, clarity, completeness, and text and presentation quality?
  • What is the quality of the practical implementation with regard to scope and effort, software design, completeness, stability, correctness, documentation, and use of libraries and frameworks?

13  Recommended Literature (German only)

  • Kornmeier, Martin: Wissenschaftlich schreiben leicht gemacht für Bachelor, Master und Dissertation. UTB (Haupt), 2013
  • Theuerkauf, Judith: Schreiben im Ingenieurstudium: Effektiv und effizient zu Bachelor-, Master- und Doktorarbeit. UTB Verlag, 2012
  • Ebel, Hans Friedrich: Bachelor-, Master- und Doktorarbeit: Anleitungen für den naturwissenschaftlich-technischen Nachwuchs. Wiley-VCH Verlag GmbH & Co. KGaA, 2009
  • Hohmann, Sandra: Wissenschaftliches Arbeiten für Naturwissenschaftler, Ingenieure und Mathematiker. Springer Vieweg, 2014
  • Balzert, Helmut ; Schäfer, Christian ; Schröder, Marion ; Kern, Uwe: Wissenschaftliches Arbeiten - Wissenschaft, Quellen, Artefakte, Organisation, Präsentation. W3l Verlag, 2008
  • Rechenberg, Peter: Technisches Schreiben: (nicht nur) für Informatiker. Carl Hanser Verlag GmbH & Co. KG, 2003
  • Prevezanos, Christoph: Technisches Schreiben: Für Informatiker, Akademiker, Techniker und den Berufsalltag. Carl Hanser Verlag GmbH & Co. KG, 2013
  • Weissgerber, Monika: Schreiben in technischen Berufen: Der Ratgeber für Ingenieure und Techniker: Berichte, Dokumentationen, Präsentationen, Fachartikel, Schulungsunterlagen. Publicis Publishing, 2010

Note: The examples for formatting formulas may be difficult to read in your browser due to the TU Dresden corporate design. We apologise for this and recommend the PDF version (German only).

Dept. of Computer Engineering and Microelectronics Computer Graphics


Final Theses in the Computer Graphics Group

The group welcomes interested candidates and always has a variety of exciting research questions on offer, which can be discussed with the research assistants or with Prof. Dr. Alexa.

However, to assure the quality of supervision in view of the large number of final theses in CG, and ultimately to protect candidates from failure, we have imposed the following prerequisites on candidates:

  • Completion, with a grade of at least "good", of CG1 or of a course at the TU or another university recognised as equivalent by the head of the CG group.
  • Completion, with a grade of at least "good", of an advanced course offered by the CG group. This course cannot be replaced by courses from other groups at the TU or from other universities.

Thesis & Project Topics

Current Topics

Photorealistic Rendering for Master Students

Thesis topics for talented computer graphics students.

  • Contact: Tomáš Iser

Dear Master students, have you successfully passed or are you currently studying the following courses?

  • NPGR010 – Advanced 3D Graphics for Movies and Games
  • NPFL138 – Deep Learning

Then we will be happy if you contact Tomáš Iser and we can discuss thesis topics with you concerning photorealistic rendering!

Inverse erosion simulation for optimal object wrapping in 3D printing

Inverse sandblasting for fun and profit.

  • Contact: Thomas Nindel
  • Keywords: 3D Printing • Erosion simulation • Master Thesis • Particle system • Software Project
  • Appearance Fabrication

After printing an object on a Polyjet 3D printer, postprocessing is applied to create the final surface finish. Sandblasting and tumbling are common postprocessing techniques. To avoid “eating” into the object geometry during this polishing, the printer can add a padding layer around the object. However, depending on the object geometry, the abrasive process removes material in a non-uniform way.

The goal of this thesis is to use standard erosion simulation techniques to find spatially varying, optimal object wraps, such that, after a certain amount of abrasion, the resulting object exactly matches the specified measurements.
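As a one-dimensional toy of the inverse problem (and not the printer's actual physics), abrasion can be modelled as a neighbourhood minimum filter plus a fixed removal depth; the thinnest wrap whose erosion still covers the part is then the morphological dilation of the raised target. All values and the abrasion model below are illustrative assumptions:

```python
def erode(profile, depth=1.0):
    """Toy abrasion model: each point drops to the minimum of its local
    neighbourhood, minus a fixed removal depth."""
    n = len(profile)
    return [min(profile[max(0, i - 1):i + 2]) - depth for i in range(n)]

def inverse_erode(target, depth=1.0):
    """Morphological dilation of (target + depth): the thinnest wrap whose
    erosion still covers the target everywhere."""
    n = len(target)
    return [max(target[max(0, i - 1):i + 2]) + depth for i in range(n)]

# unimodal toy height profile of the printed part
target = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
wrap = inverse_erode(target, depth=0.5)
result = erode(wrap, depth=0.5)

# the eroded wrap never cuts into the part, and matches it exactly away
# from the boundary (truncated windows leave extra material at the ends)
assert all(r >= t for r, t in zip(result, target))
assert result[1:6] == target[1:6]
```

A real thesis would replace the min-filter with a physically based erosion simulation and optimize the wrap iteratively, since exact matches are impossible at narrow concavities.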

Microstructure 3D Printing

Towards steerable surface reflectance.

  • Contact: Tobias Rittig
  • Keywords: 3D Printing • Master Thesis

The surface finish greatly impacts the appearance of an object. If it is smooth, light is reflected almost mirror-like, whereas roughening the surface makes it appear glossy and eventually completely matte. Current 3D printing techniques achieve such high resolutions that it might become possible to influence the surface roughness and thus the directionally dependent reflectance.

Luongo et al. [2019] demonstrated promising results in their paper on an SLA printer. They encoded directional information in the surface by overlaying it with a random noise pattern that was informed by a model of the curing process inside the 3D printer.

We would like to gain a similar understanding of our Prusa SL1 printer and want to extend the amount of control one has over the surface reflectance. In particular, we want to know how subsurface structures filled with air could affect the directionality of the reflectance. Can multi-material printing allow for more variety in the effects one can replicate on a single surface?

Past Topics

Mobile app for object detection in video

Discover the objects in a museum virtual tour.

  • Contact: Elena Šikudová
  • Keywords: Bachelor Thesis • Computer Vision • Deep Learning • ISP (NPRG045) • Mobile app • taken

Process a video stream on a mobile phone to detect objects in a museum. Identification is possible through a lightweight neural network. The model should offer sufficient accuracy and speed in recognizing different types of exhibits (size, material) in diverse conditions (lighting, location, background, viewing angles). At the same time, it should consider the limitations of the mobile device, particularly the limited computing power, memory, and battery capacity.

Illustration taken from https://viso.ai/wp-content/uploads/2022/06/mediapipe-object_tracking_android_gpu.gif https://i.giphy.com/uULru6cnBO4gM.webp

Weather & cloud classification from webcam images

What clouds are we looking at?

  • Keywords: Bachelor Thesis • Deep Learning • Image Processing • ISP (NPRG045) • sky • taken
  • Sky Modelling

Weather webcams continuously take pictures of the sky and landscape for meteorologists and the general public to get an impression of the current weather situation. They are a great tool to verify the forecast and see the local deviation.

For this project we would like to classify the types of clouds visible in the images and the current weather situation. Is it sunny? Are we seeing rain clouds? You will use machine learning (e.g. auto-encoders) and dimensionality reduction techniques (e.g. t-SNE, PCA) to find clusters in the images, i.e. groups of images depicting similar clouds or weather conditions. You will look at self-supervised techniques in order to minimize the amount of manual labelling necessary.
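As a minimal sketch of the clustering step, assuming each image has already been reduced to a low-dimensional feature vector (e.g. by PCA or an auto-encoder), a plain k-means pass can group similar sky conditions. The two-dimensional features and cluster count below are invented purely for illustration:

```python
def kmeans(points, k, iters=20):
    """Plain k-means on small feature vectors, with deterministic init."""
    step = max(1, len(points) // k)
    centroids = [list(p) for p in points[::step][:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment: nearest centroid by squared Euclidean distance
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # update: each centroid becomes the mean of its members
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centroids[c] = [sum(d) / len(members) for d in zip(*members)]
    return assign

# hypothetical 2-D features per image: (mean brightness, blue fraction)
features = [(0.90, 0.70), (0.85, 0.75),   # clear sky
            (0.50, 0.50), (0.55, 0.45),   # scattered clouds
            (0.20, 0.30), (0.15, 0.35)]   # overcast
labels = kmeans(features, k=3)
# images of the same sky type end up in the same cluster
assert labels[0] == labels[1] and labels[2] == labels[3] and labels[4] == labels[5]
```

In practice one would use a library implementation and far richer features; the point is only that clusters in feature space correspond to recurring weather conditions.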

We have a large collection (16+ million) of webcam images from the Czech Meteorological Service (CHMI) that covers 98 locations over 18+ months in 5-minute intervals. This dataset can be a valuable asset to the research community if proper annotation and metadata are available for each image. Your thesis will contribute to this additional knowledge about the images and help researchers train better models with this data in the future.

Camera model for light-dark-adaptation of the human eye

  • Keywords: Bachelor Thesis • Global Illumination • taken

In architecture visualization, physically-based rendering allows for the accurate prediction of the irradiance levels in different parts of a building. This helps architects, for example, to maximize the use of natural light in their designs. Current rendering systems, however, do not model the dynamics of the human visual system when it comes to light-dark adaptation. This is important in the design of areas with brightness transitions, like entrance areas and hallways.

For example, consider a highway tunnel: To allow for a more graceful brightness-adaptation when entering, tunnel lights are more powerful around the entrance than they are further in. The goal of this thesis is the design and implementation of a physiologically correct camera model for light-dark adaptation.
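A minimal sketch of what such a camera model might track, assuming (purely for illustration) that the adaptation level follows scene luminance through a first-order low-pass filter; real photoreceptor adaptation is asymmetric and non-linear, so this is a placeholder model only:

```python
import math

def adapt(luminances, tau=2.0, dt=0.1):
    """First-order low-pass tracking of scene luminance: a hypothetical
    stand-in for the eye's light/dark adaptation state."""
    a = luminances[0]
    alpha = 1.0 - math.exp(-dt / tau)   # per-sample filter gain
    out = []
    for lum in luminances:
        a += alpha * (lum - a)          # move a fraction of the way toward lum
        out.append(a)
    return out

# daylight (10000 cd/m^2), then a step down to a tunnel interior (50 cd/m^2)
signal = [10000.0] * 10 + [50.0] * 200
levels = adapt(signal)

# just after entering, the eye is still adapted to daylight; this lag is
# why tunnel entrance zones need much stronger lighting than the interior
assert levels[10] > 5000.0
assert abs(levels[-1] - 50.0) < 1.0
```

The thesis would replace this filter with a physiologically validated model and use the adaptation level to drive tone reproduction in the renderer.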

Generating textures with a GAN

Can GANs learn to generate good textures via differentiable rendering?

  • Contact: Martin Mirbauer
  • Keywords: Bachelor Thesis • Deep Learning • Machine Learning • optimisation • taken • texture
  • AI for Content Creation

Differentiable/inverse rendering can recover input parameters such as the camera position, an object's shape, or its texture from a target image. Using a simple differentiable rasteriser, available e.g. in PyTorch3D, the goal is to train an image-based generative adversarial network (GAN) to produce textures which, after being applied to a known object shape and rendered, give the object a plausible appearance. The resulting GAN+rasteriser network can be trained on a large dataset of textured 3D models of furniture.

Ultimately, the network should be able to create a texture for a 3D model that has neither a texture nor a mapping to the object's surface; an existing unwrapping tool will be used to create the mapping.

(intended as an implementation+experimental thesis)
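The inverse-rendering loop this topic builds on can be sketched without any graphics library. In the toy below the "renderer" is just a fixed differentiable map from a 1-D texture to an image (each pixel blends two texels), and plain gradient descent recovers the texture from the rendered target. The sampling pattern, texture values, and learning rate are all illustrative assumptions; a real implementation would use a rasteriser such as PyTorch3D's and a GAN generator instead:

```python
def render(texture, samples):
    """Toy differentiable 'renderer': pixel i blends two texels a and b
    with weight w (a stand-in for bilinear texture lookup)."""
    return [w * texture[a] + (1.0 - w) * texture[b] for a, b, w in samples]

def mse_grad(texture, samples, target):
    """Analytic gradient of the mean squared image error w.r.t. texels."""
    image = render(texture, samples)
    grad = [0.0] * len(texture)
    for (a, b, w), out, tgt in zip(samples, image, target):
        d = 2.0 * (out - tgt) / len(samples)
        grad[a] += d * w
        grad[b] += d * (1.0 - w)
    return grad

# fixed sampling pattern and a ground-truth texture to recover
samples = [(0, 1, 0.7), (1, 2, 0.4), (2, 3, 0.8), (3, 0, 0.5)]
true_texture = [0.1, 0.9, 0.4, 0.6]
target = render(true_texture, samples)

texture = [0.5] * 4                      # initial guess
for _ in range(2000):                    # plain gradient descent
    g = mse_grad(texture, samples, target)
    texture = [t - 1.0 * gi for t, gi in zip(texture, g)]

# gradients through the renderer recover the unknown texture
assert max(abs(t - s) for t, s in zip(texture, true_texture)) < 1e-3
```

The GAN would take the place of the free texture variable here: instead of optimizing texels directly, gradients flow through the rasteriser into the generator's weights.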

HDR image segmentation

Where's the sky?

  • Keywords: Deep Learning • HDR • ISP (NPRG045) • taken

Task: Build a modular system that takes a high-resolution HDR image and semantically segments it. Existing networks can be modified and used. The number of semantic classes must include but is not limited to sky (possibly clouds), buildings, and vegetation. Preferred tools: Python or Matlab

Environment Map Capture

Hack a 360 degree camera.

  • Keywords: app development • camera • hacking • hardware • ISP (NPRG045) • sky • taken

In rendering, spherical (360°) high-dynamic-range (HDR) images are used as backgrounds and for lighting 3D objects with a realistic light source. In most cases, outdoor captures are used to mimic realistic sky and sun illumination.

Traditionally, a capture setup for these images consists of a heavy tripod with a panoramic head that can rotate a high-end DSLR around its central point. This gear allows for capturing several pictures in different directions at several exposures, all taken from one single point. Later, in a post-processing step, these are stitched into a single panoramic HDR image. We possess such a setup and use it frequently to capture images of clouds.
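The exposure-merging half of that post-processing can be sketched as a weighted average of per-exposure radiance estimates, in the spirit of Debevec-style HDR assembly; the linear camera response and the hat weighting below are simplifying assumptions, not the actual pipeline:

```python
def merge_hdr(brackets):
    """brackets: (pixel_value in 0..1, exposure_seconds) pairs for one
    pixel; returns the weighted relative-radiance estimate."""
    num = den = 0.0
    for z, t in brackets:
        w = 1.0 - abs(2.0 * z - 1.0)    # hat weight: 0 at clipping, 1 at mid-grey
        num += w * (z / t)              # z / t estimates radiance for a linear sensor
        den += w
    return num / den if den > 0.0 else 0.0

# one pixel at three shutter speeds; the long exposure is saturated (z = 1.0)
radiance = merge_hdr([(0.08, 0.001), (0.55, 0.01), (1.0, 0.1)])
# the saturated sample gets zero weight and cannot drag the estimate down
assert 50.0 < radiance < 80.0
```

Real cameras need a calibrated (non-linear) response curve before this merge, which is exactly what the stitching software hides from the user.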

Unfortunately, all this gear is heavy and bulky to carry around. We are looking for a more portable solution that can be set up quickly and delivers reasonable, if less precise, images. For this we bought a state-of-the-art 360° pocket camera that is easy to set up and can be controlled wirelessly. The factory app does not allow for easy capture of HDR images, though, which is why we started looking for a custom software solution. Initial tests on reverse-engineering the communication protocol showed that it is possible to communicate with the camera using a few tricks.

We would like to develop a platform-independent (mobile/web) app that can talk to the camera and capture time lapses as well as exposure-varying sequences. This would allow the camera to be taken on daily trips and to capture environment images in the background wherever you are. This data supports machine-learning efforts in our other sky-related projects. This project is intended as an individual software project (NPRG045).

Document analysis

Cut the pdf.

  • Keywords: Bachelor Thesis • Image analysis • ISP (NPRG045) • OCR • taken

Task: Build a modular system that takes a PDF of a scanned journal, extracts pictorial and textual data, performs an analysis of the various data types, and saves the results for later statistical analysis. Preferred tools: Python or Matlab

Intelligent tilt-shift transform

Create the little world.

  • Keywords: Deep Learning • GIMP • Image Processing • ISP (NPRG045) • taken

Apply an intelligent tilt-shift transform to images to get a realistic picture of “the little world”. Use deep learning for depth estimation and apply a blur filter accordingly. Standalone app or GIMP plugin.

Optic disc detection

Eye is the window to the disease.

  • Keywords: Bachelor Thesis • Deep Learning • ISP (NPRG045) • Master Thesis • taken
  • Medical Imaging

Detect the optic disc in retinal images. Use classical computer-vision methods and compare them with deep-learning results.

Eye tracking vs. deep net activation

Do the nets see what we see?

  • Keywords: Bachelor Thesis • Deep Learning • ISP (NPRG045) • taken

Is there a difference in the visual activation in humans and in deep networks when selecting the category of an object?

  • Faculty of Informatics
  • Institute of Visual Computing & Human-Centered Technology
  • Research Unit of Computer Graphics

Bachelor Thesis


  • Currently open thesis topics
  • Former bachelor theses of our group
  • Faculty guidelines + template for bachelor theses
  • How to write a scientific text
  • Theses at VRVis

A bachelor thesis concludes the bachelor studies. It consists of a practical part and a written thesis. The procedure is as follows: 

  • Find a topic and an advisor: open thesis topics, and information on how to find an advisor if no listed topic suits you, can be found here. You also have the possibility to do an industry-relevant thesis at the VRVis Research Center. To get an idea of what to expect, look at formerly completed theses here.
  • Submit a workplan for the bachelor thesis, including an approximate time plan . You agree with your supervisor on a final submission deadline . The supervisor then needs to register the thesis in TISS.
  • Have regular meetings with your supervisor to discuss the progress and open issues. 
  • Implementation : in most theses, a piece of software implementing a new algorithm will be created. Use the research unit's GitLab or the TU Wien GitLab and share the repository at least with your supervisor. 
  • Generate results (benchmarks, screenshots, videos, ...)
  • The length of the thesis should be between 15 and 25 pages.
  • The structure should be like a thesis (i.e., containing introduction, state-of-the-art, description of method, results, conclusions, and references). 
  • Use the LaTeX template by the faculty.
  • There are hints on writing the thesis text.
  • Be sure to follow the Code of Ethics .
  • Present your thesis results in a regular meeting of our research unit (e.g., the "VisWB"). 
  • Graduate your bachelor studies: see faculty information . 


TU Wien Institute of Visual Computing & Human-Centered Technology Favoritenstr. 9-11 / E193-02 A-1040 Vienna Austria - Europe

Tel. +43-1-58801-193201



Technical University of Munich

Thesis and Completing your Studies in Informatics

General information on your thesis.

  • For tips on finding a topic and completing the thesis, please come to the information event Let's talk about - Final Thesis @in.tum
  • It is mandatory to be enrolled while writing your thesis.
  • For information on writing guidelines, formatting, extension, submission, and visibility of registration and submission, please see thesis in detail .

Thesis in detail

Possible examiners.

All examiners of the Computer Science and Computer Engineering Department at the School of Computation, Information and Technology can supervise Bachelor's and Master's theses. In addition, all affiliate members of the former Department of Informatics can act as examiners.

You can find out which department an examiner belongs to in TUMonline. Here you will find lists for professorships and chairs:

  • Chairs and Professorships in Computer Science
  • Chairs in Computer Engineering
  • Professorships in Computer Engineering

Formatting instructions

  • No handwriting (apart from the date and signature on the second page)
  • Cardstock (no transparent film, no plastic cover)
  • Hardback (no spiral binding)
  • Technische Universität München or Technical University of Munich
  • School of Computation, Information and Technology - Informatics
  • Master's Thesis in | Bachelor's Thesis in Informatics | Informatics: Games Engineering | Information Systems | Biomedical Computing | Data Engineering and Analytics ...
  • Thesis title (in the language of the thesis). Please note: If the title is different from the title you have registered, the new title must be confirmed by your examiner.
  • First and last name of the author.
  • The TUM and departmental logos are optional.
  • Do not attach any notes, images, company names, or other logos.
  • Spine: author and shortened title, imprinted or permanently fixed (e.g. glued on with wide tape).

First page:

  • Repeat the cover information. 
  • The title must be written in English as well as German.
  • Examiner:  The first and last names of the supervisor including the academic title
  • Supervisor/s:  The first and last names of the advisor/s including the academic title
  • Date: either the actual submission date or the submission deadline (the 15th of the month)
  • Please note: Do not include your student registration number or other personal data such as your date of birth. Do not attach any notes, images, company names, or logos.

Second page:

  • Include the following declaration: I confirm that this bachelor's thesis | master's thesis is my own work and I have documented all sources and material used.
  • Handwritten signature and date of signature  (date may also be handwritten).

Formatting (examples and template)

Example of cover

LaTeX template

There is a (non-official) LaTeX template, but it should be checked against the formatting instructions above.

Code of Conduct

Please respect the  Student Code of Conduct .

TUM Writing Guidelines

  • The Use of English in Thesis Titles at TUM

TUM Handout Theses

The latest version of the handout can be found on the central website Downloads - Teaching and Quality Management .

Registration

Bachelor's thesis / Master's thesis.

From 15 January 2024 , all final theses in the School of Computation, Information and Technology will be managed via the CIT portal.

Once you have found a topic and a supervising chair for your thesis, you will be registered by the supervising chair. You will receive an e-mail asking you to confirm your thesis registration. Only after you have confirmed your registration can the Academic Programmes Office check the admission requirements; you will then receive an email confirming your binding registration for your thesis.

For more information, see Thesis and Completing your Studies.

The deadline for submission of the Bachelor's thesis is four months later at the latest (Bachelor Informatics and Informatics: Games Engineering) or five months later at the latest (Bachelor Information Systems).

The deadline for submission of the Master's thesis is six months later at the latest. Other deadlines apply for part-time students.

Visibility in TUMonline (Registration and Submission)

How can I see in TUMonline that my thesis is registered?

On your personal overview page in TUMonline, you will find the application "Student Files" in the section "Studies and Courses". If you click on it, you will get to your student file, where there is also a tab "Degrees". In the lower part of this tab you will find your thesis, if you are already registered. If you hold the cursor over the orange dot, your submission date will be displayed.

Alternatively, you can go to your "Curriculum Support" via the tile "Study Status/Curriculum". If you expand the entry "Bachelor's Thesis" with the "Plus" twice, you will see a note that the thesis is registered.

When we have received your thesis and registered your submission, this will also be displayed there.

Submission and Extension for theses starting before 15.01.2024

The subsequent regulations apply only to students in the following degree programs:

  • Bachelor: Informatics, Informatics: Games Engineering, Information Systems
  • Master: Biomedical Computing, Data Engineering and Analytics, Informatics, Informatics: Games Engineering, Information Systems

Onsite submission - Only for theses starting before 15.01.2024

Next Submission Date: Wednesday, May 15th, 2024, 10 a.m. - 1 p.m., room 00.10.033

The submission of final theses is possible on every 15th of a month, or on the next following working day at the times indicated above, in room 00.10.033.

Submission by post - Only for theses starting before 15.01.2024

If it is not possible for you to hand in your final thesis onsite, you will have to send it by post.

Please mail one copy of your thesis by post to:

Technische Universität München Servicebüro Studium Informatik, SB-S-IN Boltzmannstr. 3 D-85748 Garching

The date of the postmark counts.

Submission regulations - Only for theses starting before 15.01.2024

  • the thesis must be submitted on the 15th of the month, or the next working day if the 15th is a Saturday, Sunday, or public holiday
  • early submission is possible
  • Submit one paper copy at the Academic Programs Office - Informatics
  • Submit one copy to your examiner (please clarify with your examiner how to submit this copy)
  • You may also wish to give one copy to the departmental library and one copy to the supervisor, but this is not compulsory.

If a final thesis has been approved as a group paper in consultation with the examiner in the sense of §18 Para. 2 of the APSO, each author must nevertheless submit a separate copy of the thesis with his or her own affidavit. The individual assignment of the examination performance to be evaluated should be clearly evident from the work. On the cover page and the spine only the name of the student submitting this copy should be written. On the first page, all authors can be listed under "Authors".

If you are unable to meet the submission deadline of your thesis for valid reasons for which you are not responsible, you can submit an application for an extension of the thesis to the Examination Board.

The application must be submitted immediately to the secretary responsible for your degree program, by email from your TUM account. If possible, please fill in and sign the form digitally. Medical certificates also have to be submitted in original (by post). The processing time after submission of all documents is usually two weeks, and you will be notified of the Examination Board's decision.

Applications can generally be divided into two categories:

1) Health reasons

If you are ill and can prove by a certificate that you are prevented from working on your thesis, the processing time is suspended. In the Department of Informatics, this is implemented as an extension of your submission deadline. The original medical certificate must be enclosed with the application. A certificate of incapacity for work is not sufficient. You can find the requirements for a certificate on the website Withdrawing from Examinations – Medical Certificates.

Request for an Extension for Health Reasons

2) Other reasons

In cases of delays due to other valid reasons for which you are not responsible, the submission deadline of your thesis may be extended in exceptional and particularly justified cases in agreement with the thesis examiner and with the approval of the Examination Board. Please enclose a detailed justification (if possible with supporting documents) with your application.

Request for an Extension for Other Reasons

Completing your Studies

Release of final certificate.

Please contact the secretary of the Examination Board of your study program via e-mail for the release of your Bachelor's degree documents when all grades have been entered and validly set. For enrollment in consecutive Master's programs in Informatics at TUM, the following applies: From mid-September and mid-March respectively, the graduation will be reported directly to the Enrollment Office after the release of the transcripts, so that the graduation documents are no longer necessary for enrollment.

Graduation documents and preliminary certificates

Please be aware that graduation documents and preliminary certificates can only be issued once all grades in TUMonline (including the thesis) are validated. Certificates for students of the Faculty of Informatics are issued exclusively by the Graduation Office and Academic Records Campus Garching, after approval by the Examination Board. Please contact the Secretary of the Examination Board of your study program as soon as all your grades are validated. (Responsible Secretary of the Examination Board: see the "Contact" section on the webpage for your study program.)

Transition Bachelor – Master

If you enroll for a consecutive Master's program at the Department of Informatics after your Bachelor's degree, we will forward your bachelor's degree to the Admissions and Enrollment Office for enrollment (not for the application!). The graduation documents are therefore not necessary for enrollment. A green checkmark will then appear in the online application portal for your degree certificate and diploma. Please note that it may take a few days until the documents are updated in the portal. If you do not see these two green check marks 1 week before the enrollment deadline, please contact the secretary of the examination board as soon as possible.

Please find more information under  graduation .

Purdue e-Pubs


Department of Computer Graphics Technology Degree Theses

“The Department of Computer Graphics Technology (CGT) offers the Master of Science degree with a thesis option. Students may choose courses that deal with virtual and augmented reality, product lifecycle management, and interactive media research.” Below are degree theses on these subjects.

  • Nicoletta Adamo-Villani
  • Bedrich Benes
  • Vetria Byrd
  • Yingjie Chen
  • Patrick Connolly
  • Esteban Garcia
  • Ronald Glotzbach
  • Nathan Hartman
  • Raymond Hassan
  • Craig Miller
  • James Mohler
  • Carlos Morales
  • Amy Mueller
  • Paul Parsons
  • Nancy Rasche
  • Mihaela Vorvoreanu
  • David Whittinghill

Theses from 2017

A Pattern Approach to Examine the Design Space of Spatiotemporal Visualization , Chen Guo

Theses from 2016

Zephyr: A social psychology-based mobile application for long-distance romantic partners , Dhiraj Bodicherla

A study of how Chinese ink painting features can be applied to 3D scenes and models in real-time rendering , Muning Cao

Quit playing with your watch: Perceptions of smartwatch use , Christopher M. Gaeta

Inter-color NPR lines: A comparison of rendering techniques , Donald G. Herring

Gesture based non-obstacle interaction on mobile computing devices for dirty working environment , William B. Huynh

Implementation and validation of a probabilistic open source baseball engine (POSBE): Modeling hitters and pitchers , Rhett Tracy Schaefer

E-commerce mental models of upper middle class Chinese female consumers in Beijing , Yunfan Song

Theses from 2015

Framework for functional tree simulation applied to 'golden delicious' apple trees , Marek Fiser

Teaching introductory game development with unreal engine: Challenges, strategies, and experiences , Nicholas A. Head

Simulating depth of field using per-pixel linked list buffer , Yang Liu

Dynamic textures , Illia Ziamtsov

Theses from 2014

Perceptions and expectations about the use of social media to raise situational awareness in emergency events , Israa Bukhari

Usability testing of the M.A.E.G.U.S. serious game , James He

Just noticeable difference survey of computer generated imagery using normal maps , Michael Edward Hoerter

The effect of color on emotions in animated films , Andrew Kennedy

Usability of immersive virtual reality input devices , Christopher G. Mankey

M.A.E.G.U.S: Measuring alternate energy generation via Unity simulation , Kavin Muhilan Nataraja

Augmented Reality Application Utility For Aviation Maintenance Work Instruction , John Bryan Pourcho

Senescence: An Aging based Character Simulation Framework , Suren Deepak Rajasekaran

Evaluating Optimum Levels Of Detail For 3d Interactive Aviation Maintenance Instructions , Nicholas Rohe

Integration of Z-Depth in Compositing , Kayla Steckel

Computer vision aided lip movement correction to improve English pronunciation , Shuang Wei

Computer animation for learning building construction management: A comparative study of first-person versus third-person view , Jun Yu

Theses from 2013

Pilot Study of a Kinect-Based Video Game to Improve Physical Therapy Treatment , Jacob Samuel Brown

A Study Of The Effects Of Computer Animated Character Body Style On Perception Of Facial Expression , Katherine Cissell

Investigating the Effect Specific Credits of the LEED Rating System have on the Energy Performance of an Existing Building , Richelle Fosu

The Effects Of Parallax Scrolling On User Experience And Preference In Web Design , Dede M. Frederick

Understanding Verification and Validation of Product Model Data in Industry , Joseph Gerace

An Examination of Presentation Strategies for Textual Data in Augmented Reality , Kanrawi Kitkhachonkunlaphat

Multifunctional Furniture for Underprivileged Communities: A Milestone in Sustainable Development , Farah Nasser

Impact of Graphical Fidelity on a Player's Emotional Response in Video Games , Vivianette Ocasio De Jesus

Correlating the Purdue Spatial Visualization Test with the Wonderlic Personnel Test for American Football Players , Karthik Sukumar

Theses from 2012

Defining Industry Expectations and Misconceptions of Art and Technology Co-Creativity , Vanessa C. Brasfield

Social Media Marketing in a Small Business: A Case Study , Sarah Cox

User Assisted Tree Reconstruction from Point Clouds , William P. Leavenworth II

Effects of Augmented Reality Presentations on Consumers' Visual Perception of Floor Plans , April L. Lutheran

An Analysis of STEP, JT, and PDF Format Translation Between Constraint-Based CAD Systems with a Benchmark Model , Dillon McKenzie-Veal

The Effect of Supplemental Pictorial Freehand Sketches on the Construction of CAD Models , Maria Nizovtseva

Towards the Development of Cost Metrics for Inadequate Interoperability , Kyle L. Sigo

The Effects of Microblogging in the Classroom on Communication , Alex Vernacchia

Incorporating Reverse Engineering Methodology into Engineering Curricula , Trevor Wanamaker

Research on the Relationship between Story and the Popularity of Animated Movies , Meng Wang

Sketching 3D Animation , Innfarn Yoo

Theses from 2011

How do Millennial Engineering and Technology Students Experience Learning Through Traditional Teaching Methods Employed in the University Setting? , Elizabeth A. Howard

What Is the Effect of Real Versus Augmented Models for the Advancement of Spatial Ability Based on Haptic or Visual Learning Style of Entry-Level Engineering Graphics Students? , Katie L. Huffman

Stereoscopic Visualization as a Tool For Learning Astronomy Concepts , Norman M. Joseph

Recruiting for Higher Education: The Roles that Print, Web, and Social Media Play in the Decision Process for Prospective Students , Brandon X. Karcher

Comparisons Between Educational Map Software Displaying Soil Data , Laura A. Kocur

Visual Learning Styles Among Digital Natives , Eric Palmer

Using A Serious Game To Motivate High School Students To Want To Learn About History , Marin M. Petkov

Adopting Game Technology for Architectural Visualization , Scott A. Schroeder

GPU-Based Global Illumination Using Lightcuts , Tong Zhang

Theses from 2010

Effects of Lighting Phenomena on Perceived Realism of Rendered Water-rich Virtual Environments , Micah L. Bojrab

Full CUDA Implementation Of GPGPU Recursive Ray-Tracing , Andrew D. Britton

An Examination of Social Presence in Video Conferencing vs. an Augmented Reality Conferencing Application , Travis B. Faas

A Study of the Effects of Immersion on Short-term Spatial Memory , Eric A. Johnson

Evaluating the Efficacy of Clustered Visualization in Exploratory Search Tasks , Sarika S. Kothari

Large-Scale 3D Visualization of Doppler Reflectivity Data , Peter Kristof

Data Structures And Techniques For Visualization Of Large Volumetric Carbon Dioxide Datasets In A Real Time Experience , Jason B. Lambert

A Comparison of Peer Evaluation: The Evaluation App versus DeviantArt , Brian M. Mccreight

Evaluating User Modality Preference Effect On Cognitive Load In A Multimedia , Justin V. Scott

The Small and Medium Enterprise's Perspective of Product Data Management , Karen Waldenmeyer


Students are expected to have a bachelor's degree in Computer Science, Mathematics, Engineering or related fields. Candidates with weak computer programming skills will also be considered but they may be required to take complementary courses.

Students of the master's degree will develop expertise in modeling and processing geometric and volumetric data, as well as knowledge of the management, manipulation, interaction and rendering of highly complex geometric systems, with applications in fields such as industrial design, computer games, urban design, medical imaging and cultural heritage.

ENTREPRENEURSHIP AND INNOVATION

  • Responsible: JOAN CARLES GIL MARTIN
  • G1.1 To show initiative and acquire basic knowledge about organizations, and to become familiar with the instruments and techniques for generating and managing ideas which allow solving known problems and generating opportunities.
  • G1.2 To take initiatives which generate opportunities, new objects or solutions, with a vision of process and market implementation, and to involve other team members in the projects to be developed (capacity to act autonomously).
  • G1.3 To have strong decision-making skills. To use knowledge and strategic skills for the creation and management of projects, to apply systematic solutions to complex problems, and to design and manage innovation in the organization. To demonstrate flexibility and professionalism in one's work.

SUSTAINABILITY AND SOCIAL COMMITMENT

  • Responsible: JOAN CLIMENT VILARÓ, JOSE M. CABRÉ GARCIA
  • G2.1 To analyse the global situation in a systematic and critical way. To be capable of recognising the social and environmental implications of the professional activity of computer science. To understand the role of engineering as a profession and in society, and the ethical and professional responsibility of the informatics engineer. To value the commitment to the principles of equal opportunity, a culture of peace and democratic values.
  • G2.2 To apply sustainability criteria and the deontological codes of the profession in the design and evaluation of technological solutions. To identify the need to comply with legislation, regulations and standards, especially those affecting the informatics engineering profession. To analyse and evaluate the environmental impact of technical solutions in the ICT field.
  • G2.3 To take into account the social, economic and environmental dimensions, and the right to privacy, when applying solutions and carrying out projects coherent with human development and sustainability.

THIRD LANGUAGE

  • Responsible: ANTONIA SOLER CERVERA
  • G3.1 To understand and use effectively handbooks, product specifications and other technical information written in English.
  • G3.2 To study using resources written in English. To write a report or a technical document in English. To participate in a technical meeting held in English.
  • G3.3 To deliver an oral presentation in English and answer questions from the audience. To work effectively in an international context, communicating orally in English with people of different nationalities.

EFFECTIVE ORAL AND WRITTEN COMMUNICATION

  • Responsible: SILVIA LLORENTE VIEJO
  • G4.1 To plan oral communication, respond properly to the questions asked and write basic texts with orthographic and grammatical correctness. To structure correctly the contents of a technical report. To select relevant materials to prepare a topic and synthesize its contents. To respond properly when asked.
  • G4.2 To use strategies to prepare and deliver oral presentations and to write texts and documents with coherent content, adequate structure and style, and a good orthographic and grammatical level. To deliver an oral presentation in front of a limited audience. To choose properly the contents, style, timing and format of the presentation. To be capable of communicating effectively with users in non-technical language and of understanding their needs.
  • G4.3 To communicate clearly and efficiently in oral and written presentations about complex topics, adapting to the situation, the type of audience and the communication goals, using adequate strategies and means. To analyse, assess and respond adequately to questions from the audience.

TEAMWORK

  • Responsible: ALICIA AGENO PULIDO
  • G5.1 Capacity to collaborate in a single-discipline environment. To identify the objectives of the group and collaborate in the design of the strategy and the working plan to achieve them. To identify the responsibilities of each member of the group and assume the personal commitment to the assigned task. To evaluate and present one's own results. To recognise the value of cooperation and exchange information with the other members of the group. To exchange information about the group's progress and propose strategies to improve its operation.
  • G5.2 To plan the objectives, operating rules, responsibilities, agenda and review procedure of the work. To identify conflicts and to negotiate and resolve them effectively. To adapt to different kinds of groups (big/small, technical/mixed, co-located/remote). To interact efficiently and promote the participation of other group members.
  • G5.3 To identify the roles, skills and weaknesses of the different members of the group. To propose improvements in the group structure. To interact with efficacy and professionalism. To negotiate and manage conflicts within the group. To recognise and support, or assume, the leader role in the working group. To evaluate and present the results of the group's tasks. To represent the group in negotiations involving other people. Capacity to collaborate in a multidisciplinary environment. To know and apply techniques for promoting creativity.

INFORMATION LITERACY

  • Responsible: GLADYS MIRIAM UTRERA IGLESIAS
  • G6.1 To identify one's own information needs and to use the available collections, locations and services to design and execute simple searches suited to the thematic scope. To classify the gathered information and synthesize it. To respect intellectual property and cite sources properly.
  • G6.2 After identifying the parts of an academic document and organizing the bibliographic references, to design and execute a good strategy for an advanced search with specialized information resources, selecting the pertinent information according to relevance and quality criteria.
  • G6.3 To plan and use the information necessary for an academic essay (for example, the final degree project) based on critical reflection about the information resources used. To manage information competently, independently and autonomously. To evaluate the information found and identify its deficiencies.

AUTONOMOUS LEARNING

  • Responsible: JOAN ARANDA LÓPEZ
  • G7.1 Directed learning: to perform the assigned tasks in the planned time, working with the indicated information sources according to the guidelines of the teacher or tutor. To identify the progress and degree of accomplishment of the learning goals. To identify one's strong and weak points.
  • G7.2 Guided learning: to perform assigned tasks according to basic orientations given by the teaching staff; to decide the time needed for each task, including personal contributions and expanding the indicated information sources. To make appropriate use of study guides. Capacity to take decisions based on objective criteria (available experimental, scientific or simulation data). Capacity to evaluate one's own strong and weak points and act accordingly.
  • G7.3 Autonomous learning: capacity to plan and organize personal work. To apply the acquired knowledge when performing a task, decide how to perform it and the time needed according to its suitability and importance, and select the most adequate information sources. To identify the importance of establishing and maintaining contacts with students, teaching staff and professionals (networking). To identify information forums about ICT engineering, its advances and its impact on society (IEEE, associations, etc.).

APPROPRIATE ATTITUDE TOWARDS WORK

  • Responsible: CARME MARTIN ESCOFET
  • G8.1 To have a wide vision of the career possibilities in the field of informatics engineering. To have a positive and receptive attitude towards quality in the exercise of the profession.
  • G8.2 To be rigorous in professional practice. To be motivated and have a proactive attitude towards quality in one's work. Capacity to adapt to organizational or technological changes. Capacity to work in situations with an information shortage and/or time and/or resource restrictions.
  • G8.3 To be motivated by professional development, facing new challenges and continuous improvement. To have the capacity to work in situations with a lack of information.

REASONING

  • Responsible: KARINA GIBERT OLIVERAS
  • G9.1 Critical, logical and mathematical reasoning capacity. Capacity to understand abstraction and use it properly.
  • G9.2 Analysis and synthesis capacity, capacity to solve problems in one's field and to interpret the results critically. Abstraction capacity: capacity to create and use models which reflect real situations. Capacity to design and perform simple experiments and to analyse and interpret their results critically.
  • G9.3 Critical capacity; evaluation capacity.

COMMON TECHNICAL COMPETENCIES

  • CT1.1A To demonstrate knowledge and comprehension of the fundamentals of computer usage and programming, of operating systems, databases and, in general, of computer programs applicable to engineering.
  • CT1.1B To demonstrate knowledge and comprehension of the fundamentals of computer usage and programming. Knowledge of the structure, operation and interconnection of computer systems, and of the fundamentals of their programming.
  • CT1.2A To interpret, select and value concepts, theories, uses and technological developments related to computer science and its application, derived from the needed fundamentals of mathematics, statistics and physics. Capacity to solve the mathematical problems presented in engineering. Ability to apply knowledge of: algebra, differential and integral calculus and numeric methods; statistics and optimization.
  • CT1.2B To interpret, select and value concepts, theories, uses and technological developments related to computer science and its application, derived from the needed fundamentals of mathematics, statistics and physics. Capacity to understand and master the physical and technological fundamentals of computer science: electromagnetism, waves, circuit theory, electronics and photonics, and their application to solving engineering problems.
  • CT1.2C To use properly theories, procedures and tools in the professional practice of informatics engineering in all its fields (specification, design, implementation, deployment and product evaluation), demonstrating comprehension of the trade-offs adopted in design decisions.
  • CT2.1 To demonstrate knowledge and capacity to apply the principles, methodologies and life cycles of software engineering.
  • CT2.2 To demonstrate knowledge and capacity to apply the characteristics, functionalities and structure of data bases, allowing an adequate use, design, analysis and implementation of applications based on them.
  • CT2.3 To design, develop, select and evaluate computer applications, systems and services and, at the same time, ensure their reliability, security and quality according to ethical principles and current legislation and standards.
  • CT2.4 To demonstrate knowledge and capacity to apply the needed tools for storage, processing and access to the information system, even if they are web-based systems.
  • CT2.5 To design and evaluate person-computer interfaces which guarantee the accessibility and usability of computer systems, services and applications.
  • CT3.1 To understand and explain reasonably the basic economic concepts, the objectives and instruments of economic policy, and their influence on economic activity.
  • CT3.2 To know and describe the main processes of the functional areas of a company and the links between them, which make their coordination and integration possible.
  • CT3.3 To be able to find and interpret basic information for evaluating the economic environment of the organization.
  • CT3.4 To know the basic financial concepts which allow valuing the costs and benefits of a project or of different alternatives, monitoring a budget, controlling costs, etc.
  • CT3.5 To identify the possibilities of use and the benefits which can be derived from the different types of business software applications and existing ICT services.
  • CT3.6 To demonstrate knowledge of the ethical dimension of the company: in general, social and corporate responsibility and, specifically, the civil and professional responsibilities of the informatics engineer.
  • CT3.7 To demonstrate knowledge of the regulations governing informatics at the national, European and international levels.
  • CT4.1 To identify the most adequate algorithmic solutions to solve medium difficulty problems.
  • CT4.2 To reason about the correctness and efficiency of an algorithmic solution.
  • CT4.3 To demonstrate knowledge and capacity to apply the fundamental principles and the basic techniques of the intelligent systems and its practical application.
  • CT5.1 To choose, combine and exploit different programming paradigms when building software, taking into account criteria like ease of development, efficiency, portability and maintainability.
  • CT5.2 To know, design and use efficiently the most adequate data types and data structures to solve a problem.
  • CT5.3 To design, write, test, refine, document and maintain code in a high-level programming language to solve programming problems, applying algorithmic schemas and using data structures.
  • CT5.4 To design programs' architecture using techniques of object orientation, modularization, and the specification and implementation of abstract data types.
  • CT5.5 To use the tools of a software development environment to create and develop applications.
  • CT5.6 To demonstrate knowledge and capacity to apply the fundamental principles and basic techniques of parallel, concurrent, distributed and real-time programming.
  • CT6.1 To demonstrate knowledge and capacity to manage and maintain computer systems, services and applications.
  • CT6.2 To demonstrate knowledge, comprehension and the capacity to evaluate the structure and architecture of computers and the basic components that compose them.
  • CT6.3 To demonstrate knowledge of the characteristics, functionalities and structure of operating systems, allowing their adequate use, management and design, as well as the implementation of applications based on their services.
  • CT6.4 To demonstrate knowledge of and the capacity to apply the characteristics, functionalities and structure of distributed systems and computer and Internet networks, guaranteeing their use and management, as well as the design and implementation of applications based on them.
  • CT7.1 To demonstrate knowledge of quality metrics and be able to use them.
  • CT7.2 To evaluate hardware/software systems according to given quality criteria.
  • CT7.3 To determine the factors that negatively affect the security and reliability of a hardware/software system, and minimize their effects.
  • CT8.1 To identify current and emerging technologies and evaluate whether they are applicable to satisfy users' needs.
  • CT8.2 To assume the roles and functions of the project manager and apply, in the organizations field, the techniques for managing the timing, cost, financial aspects, human resources and risk.
  • CT8.3 To demonstrate knowledge and be able to apply appropriate techniques for modelling and analysing different kinds of decisions.
  • CT8.4 To elaborate the list of technical conditions for a computer installation, fulfilling all current standards and regulations.
  • CT8.5 To manage and solve problems and conflicts using the capacity to generate alternatives or future scenarios analysed properly, integrating the uncertainty aspects and the multiple objectives to consider.
  • CT8.6 To demonstrate the comprehension of the importance of the negotiation, effective working habits, leadership and communication skills in all the software development environments.
  • CT8.7 To control project versions and configurations.

COMPUTER ENGINEERING SPECIALIZATION

  • CEC1.1 To design a system based on microprocessor/microcontroller.
  • CEC1.2 To design/configure an integrated circuit using the adequate software tools.
  • CEC2.1 To analyse, evaluate, select and configure hardware platforms for the development and execution of computer applications and services.
  • CEC2.2 To program taking into account the hardware architecture, using assembly language as well as high-level programming languages.
  • CEC2.3 To develop and analyse software for systems based on microprocessors and its interfaces with users and other devices.
  • CEC2.4 To design and implement system and communications software.
  • CEC2.5 To design and implement operating systems.
  • CEC3.1 To analyse, evaluate and select the most adequate hardware and software platform to support embedded and real-time applications.
  • CEC3.2 To develop specific processors and embedded systems; to develop and optimize the software of these systems. 
  • CEC4.1 To design, deploy, administrate and manage computer networks.
  • CEC4.2 To demonstrate comprehension, to apply and manage the guarantee and security of computer systems.

COMPUTER SCIENCE SPECIALIZATION

  • CCO1.1 To evaluate the computational complexity of a problem, know the algorithmic strategies which can solve it and recommend, develop and implement the solution which guarantees the best performance according to the established requirements.
  • CCO1.2 To demonstrate knowledge about the theoretical fundamentals of programming languages and the associated lexical, syntactical and semantic processing techniques and be able to apply them to create, design and process languages.
  • CCO1.3 To define, evaluate and select platforms to develop and produce hardware and software for developing computer applications and services of different complexities.
  • CCO2.1 To demonstrate knowledge of the fundamentals, paradigms and techniques proper to intelligent systems, and to analyse, design and build computer systems, services and applications which use these techniques in any applicable field.
  • CCO2.2 Capacity to acquire, obtain, formalize and represent human knowledge in a computable way to solve problems through a computer system in any applicable field, in particular in the fields related to computation, perception and operation in intelligent environments.
  • CCO2.3 To develop and evaluate interactive systems and systems that show complex information, and its application to solve person-computer interaction problems.
  • CCO2.4 To demonstrate knowledge of and develop computational learning techniques; to design and implement applications and systems that use them, including those dedicated to the automatic extraction of information and knowledge from large data volumes.
  • CCO2.5 To implement information retrieval software.
  • CCO2.6 To design and implement graphic, virtual reality, augmented reality and video-games applications.
  • CCO3.1 To implement critical code following criteria like execution time, efficiency and security.
  • CCO3.2 To program taking into account the hardware architecture, using assembly language as well as high-level programming languages.

INFORMATION SYSTEMS SPECIALIZATION

  • CSI2.1 To demonstrate comprehension and apply the management principles and techniques about quality and technological innovation in the organizations.
  • CSI2.2 To conceive, deploy, organize and manage computer systems and services, in business or institutional contexts, to improve the business processes; to take responsibility and lead the start-up and the continuous improvement; to evaluate its economic and social impact.
  • CSI2.3 To demonstrate knowledge of and the capacity to apply knowledge extraction and knowledge management systems.
  • CSI2.4 To demonstrate knowledge of and the capacity to apply Internet-based systems (e-commerce, e-learning, etc.).
  • CSI2.5 To demonstrate knowledge of and the capacity to apply business information systems (ERP, CRM, SCM, etc.).
  • CSI2.6 To demonstrate knowledge and capacity to apply decision support and business intelligence systems.
  • CSI2.7 To manage the organization's presence on the Internet.
  • CSI3.1 To demonstrate comprehension of the principles of risk evaluation and apply them correctly when elaborating and executing operation plans.
  • CSI3.2 To develop the information system plan of an organization.
  • CSI3.3 To evaluate technological offers for the development of information and management systems.
  • CSI3.4 To develop business solutions through the deployment and integration of hardware and software systems.
  • CSI3.5 To propose and coordinate changes to improve the operation of the systems and the applications.
  • CSI4.1 To participate actively in the specification of the information and communication systems.
  • CSI4.2 To participate actively in the design, implementation and maintenance of the information and communication systems.
  • CSI4.3 To administrate databases (CES1.6).
  • CSI1 To demonstrate comprehension of and apply the principles and practices of the organization, so as to link the technical and management communities of an organization and participate actively in user training.

SOFTWARE ENGINEERING SPECIALIZATION

  • CES1.1 To develop, maintain and evaluate complex and/or critical software systems and services.
  • CES1.2 To solve integration problems in function of the strategies, standards and available technologies.
  • CES1.3 To identify, evaluate and manage potential risks which could arise in software building.
  • CES1.4 To develop, maintain and evaluate distributed services and applications with network support.
  • CES1.5 To specify, design, implement and evaluate databases.
  • CES1.6 To administrate databases (CSI4.3).
  • CES1.7 To control quality and design tests in software production.
  • CES1.8 To develop, maintain and evaluate control and real-time systems.
  • CES1.9 To demonstrate comprehension of the management and governance of software systems.
  • CES2.1 To define and manage the requirements of a software system.
  • CES2.2 To design adequate solutions in one or more application domains, using software engineering methods which integrate ethical, social, legal and economical aspects.
  • CES3.1 To develop multimedia services and applications.
  • CES3.2 To design and manage a data warehouse.

INFORMATION TECHNOLOGY SPECIALIZATION

  • CTI1.1 To demonstrate understanding of the environment of an organization and its needs in the field of information and communication technologies.
  • CTI1.2 To select, design, deploy, integrate and manage communication networks and infrastructures in an organization.
  • CTI1.3 To select, deploy, integrate and manage information systems which satisfy the organization's needs with the identified cost and quality criteria.
  • CTI1.4 To select, design, deploy, integrate, evaluate, build, manage, exploit and maintain hardware, software and network technologies, according to adequate cost and quality parameters.
  • CTI2.1 To manage, plan and coordinate the management of the computing infrastructure: hardware, software, networks and communications.
  • CTI2.2 To administrate and maintain applications, computer systems and computer networks (the knowledge and comprehension levels are described in the common technical competences).
  • CTI2.3 To demonstrate comprehension of, apply and manage the reliability and security of computer systems (CEI C6).
  • CTI3.1 To conceive systems, applications and services based on network technologies, taking into account Internet, web, electronic commerce, multimedia, interactive services and ubiquitous computation.
  • CTI3.2 To implement and manage ubiquitous systems (mobile computing systems).
  • CTI3.3 To design, establish and configure networks and services.
  • CTI3.4 To design communications software.
  • CTI4 To use methodologies centred on the user and the organization to develop, evaluate and manage applications and systems based on the information technologies which ensure the accessibility, ergonomics and usability of the systems.
  • CG1 Capability to plan, design and implement products, processes, services and facilities in all areas of Artificial Intelligence.
  • CG2 Capability to lead, plan and supervise multidisciplinary teams.
  • CG3 Capacity for modeling, calculation, simulation, development and implementation in technology centers and engineering companies, particularly in research, development and innovation in all areas related to Artificial Intelligence.
  • CG4 Capacity for general management, technical management and research projects management, development and innovation in companies and technology centers in the area of Artificial Intelligence.
  • CEA1 Capability to understand the basic operating principles of the main Multiagent Systems techniques, and to know how to use them in the environment of an intelligent system or service.
  • CEA2 Capability to understand the basic operating principles of the main Planning and Approximate Reasoning techniques, and to know how to use them in the environment of an intelligent system or service.
  • CEA3 Capability to understand the basic operating principles of the main Machine Learning techniques, and to know how to use them in the environment of an intelligent system or service.
  • CEA4 Capability to understand the basic operating principles of the main Computational Intelligence techniques, and to know how to use them in the environment of an intelligent system or service.
  • CEA5 Capability to understand the basic operating principles of the main Natural Language Processing techniques, and to know how to use them in the environment of an intelligent system or service.
  • CEA6 Capability to understand the basic operating principles of the main Computational Vision techniques, and to know how to use them in the environment of an intelligent system or service.
  • CEA7 Capability to understand the problems, and the solutions to those problems, that arise in the professional practice of applying Artificial Intelligence in business and industry environments.
  • CEA8 Capability to research new techniques, methodologies, architectures, services or systems in the area of Artificial Intelligence.
  • CEA9 Capability to understand Multiagent Systems advanced techniques, and to know how to design, implement and apply these techniques in the development of intelligent applications, services or systems.
  • CEA10 Capability to understand advanced techniques of Human-Computer Interaction, and to know how to design, implement and apply these techniques in the development of intelligent applications, services or systems.
  • CEA11 Capability to understand the advanced techniques of Computational Intelligence, and to know how to design, implement and apply these techniques in the development of intelligent applications, services or systems.
  • CEA12 Capability to understand the advanced techniques of Knowledge Engineering, Machine Learning and Decision Support Systems, and to know how to design, implement and apply these techniques in the development of intelligent applications, services or systems.
  • CEA13 Capability to understand advanced techniques of Modeling, Reasoning and Problem Solving, and to know how to design, implement and apply these techniques in the development of intelligent applications, services or systems.
  • CEA14 Capability to understand the advanced techniques of Vision, Perception and Robotics, and to know how to design, implement and apply these techniques in the development of intelligent applications, services or systems.

PROFESSIONAL

  • CEP1 Capability to analyze the information needs of different organizations, identifying the sources of uncertainty and variability.
  • CEP2 Capability to solve the decision-making problems of different organizations by integrating intelligent tools.
  • CEP3 Capacity for applying Artificial Intelligence techniques in technological and industrial environments to improve quality and productivity.
  • CEP4 Capability to design, write and report on computer science projects in the specific area of Artificial Intelligence.
  • CEP5 Capability to design new tools and new techniques of Artificial Intelligence in professional practice.
  • CEP6 Capability to assimilate and integrate the changing economic, social and technological environment into the objectives and procedures of informatics work in intelligent systems.
  • CEP7 Capability to respect the legal rules and deontology in professional practice.
  • CEP8 Capability to respect the surrounding environment and design and develop sustainable intelligent systems.
  • CT1 Capability to know and understand a business organization and the science that defines its activity; capability to understand the labor rules and the relations between planning, industrial and commercial strategies, quality and profit.
  • CT2 Capability to know and understand the complexity of economic and social typical phenomena of the welfare society; capability to relate welfare with globalization and sustainability; capability to use technique, technology, economics and sustainability in a balanced and compatible way.
  • CT3 Ability to work as a member of an interdisciplinary team, either as a regular member or performing management tasks, in order to develop projects with pragmatism and a sense of responsibility, making commitments that take into account the available resources.
  • CT4 Capacity for managing the acquisition, the structuring, analysis and visualization of data and information in the field of specialisation, and for critically assessing the results of this management.
  • CT5 Capability to be motivated for professional development, to meet new challenges and for continuous improvement. Capability to work in situations with lack of information.
  • CT6 Capability to evaluate and analyze, in a reasoned and critical way, situations, projects, proposals, reports and scientific-technical surveys, and to argue the reasons that explain or justify such situations, proposals, etc.

ANALYSIS AND SYNTHESIS

  • CT7 Capability to analyze and solve complex technical problems.

DIRECTION AND MANAGEMENT

  • CDG1 Capability to integrate technologies, applications, services and systems of Informatics Engineering, in general and in broader and multidisciplinary contexts.
  • CDG2 Capacity for strategic planning, development, direction, coordination, and technical and economic management in the areas of Informatics Engineering related to: systems, applications, services, networks, infrastructure or computer facilities and software development centers or factories, respecting the implementation of quality and environmental criteria in multidisciplinary working environments.
  • CDG3 Capability to manage research, development and innovation projects in companies and technology centers, guaranteeing the safety of people and assets, the final quality of products and their homologation.
  • CTE1 Capability to model, design, define the architecture, implement, manage, operate, administer and maintain applications, networks, systems, services and computer contents.
  • CTE2 Capability to understand and know how to apply the operation and organization of Internet, technologies and protocols for next generation networks, component models, middleware and services.
  • CTE3 Capability to secure, manage, audit and certify the quality of developments, processes, systems, services, applications and software products.
  • CTE4 Capability to design, develop, manage and evaluate mechanisms of certification and safety guarantee in the management of and access to information in local or distributed processing.
  • CTE5 Capability to analyze the information needs that arise in an environment and carry out all the stages in the process of building an information system.
  • CTE6 Capability to design and evaluate operating systems and servers, and applications and systems based on distributed computing.
  • CTE7 Capability to understand and to apply advanced knowledge of high performance computing and numerical or computational methods to engineering problems.
  • CTE8 Capability to design and develop systems, applications and services in embedded and ubiquitous systems.
  • CTE9 Capability to apply mathematical, statistical and artificial intelligence methods to model, design and develop applications, services, intelligent systems and knowledge-based systems.
  • CTE10 Capability to use and develop methodologies, methods, techniques, special-purpose programs, rules and standards for computer graphics.
  • CTE11 Capability to conceptualize, design, develop and evaluate the human-computer interaction of products, systems, applications and informatics services.
  • CTE12 Capability to create and exploit virtual environments, and to create, manage and distribute multimedia content.
  • CTR1 Capacity for knowing and understanding a business organization and the science that rules its activity; capability to understand the labour rules and the relationships between planning, industrial and commercial strategies, quality and profit. Capacity for developing creativity, entrepreneurship and an innovative mindset.
  • CTR2 Capability to know and understand the complexity of the economic and social phenomena typical of the welfare society. Capacity to analyze and assess the social and environmental impact.
  • CTR3 Capacity to work as a team member, either as a regular member or performing directive activities, in order to help develop projects in a pragmatic manner and with a sense of responsibility; capability to take into account the available resources.
  • CTR4 Capability to manage the acquisition, structuring, analysis and visualization of data and information in the area of informatics engineering, and to critically assess the results of this effort.
  • CTR5 Capability to be motivated by professional achievement and to face new challenges, and to have a broad vision of the possibilities of a career in the field of informatics engineering. Capability to be motivated by quality and continuous improvement, and to act rigorously in professional development. Capability to adapt to technological or organizational changes. Capacity for working in the absence of information and/or under time and/or resource constraints.
  • CTR6 Capacity for critical, logical and mathematical reasoning. Capability to solve problems in their area of study. Capacity for abstraction: the capability to create and use models that reflect real situations. Capability to design and implement simple experiments, and analyze and interpret their results. Capacity for analysis, synthesis and evaluation.

COMPUTER GRAPHICS AND VIRTUAL REALITY

  • CEE1.1 Capability to understand and know how to apply current and future technologies for the design and evaluation of interactive three-dimensional graphic applications, whether prioritizing image quality or prioritizing interactivity and speed, and to understand the associated trade-offs and the reasons behind them.
  • CEE1.2 Capability to understand and know how to apply current and future technologies for the evaluation, implementation and operation of virtual and/or augmented reality environments, and of 3D user interfaces based on natural interaction devices.
  • CEE1.3 Ability to integrate the technologies mentioned in competences CEE1.1 and CEE1.2 with other digital information processing technologies in order to build new applications, and to make significant contributions in multidisciplinary teams that use computer graphics.

COMPUTER NETWORKS AND DISTRIBUTED SYSTEMS

  • CEE2.1 Capability to understand models, problems and algorithms related to distributed systems, and to design and evaluate algorithms and systems that address distribution problems and provide distributed services.
  • CEE2.2 Capability to understand models, problems and algorithms related to computer networks, and to design and evaluate algorithms, protocols and systems that handle the complexity of computer communication networks.
  • CEE2.3 Capability to understand models, problems and mathematical tools to analyze, design and evaluate computer networks and distributed systems.

ADVANCED COMPUTING

  • CEE3.1 Capability to identify computational barriers and to analyze the complexity of computational problems in different areas of science and technology as well as to represent high complexity problems in mathematical structures which can be treated effectively with algorithmic schemes.
  • CEE3.2 Capability to use a wide and varied spectrum of algorithmic resources to solve high difficulty algorithmic problems.
  • CEE3.3 Capability to understand the computational requirements of problems from non-informatics disciplines and to make significant contributions in multidisciplinary teams that use computing.

HIGH PERFORMANCE COMPUTING

  • CEE4.1 Capability to analyze, evaluate and design computers and to propose new techniques for improving their architecture.
  • CEE4.2 Capability to analyze, evaluate, design and optimize software considering the architecture and to propose new optimization techniques.
  • CEE4.3 Capability to analyze, evaluate, design and manage system software in supercomputing environments.

SERVICE ENGINEERING

  • CEE5.1 Capability to participate in projects to improve or create service systems, providing in particular: a) innovation and research proposals based on new uses and developments of information technologies, b) application of the most appropriate software engineering and database principles when developing information systems, c) definition, installation and management of the infrastructure/platform necessary for the efficient running of service systems.
  • CEE5.2 Capability to apply the acquired knowledge to any kind of service system, being familiar with some of them and having thorough knowledge of eCommerce systems and their extensions (eBusiness, eOrganization, eGovernment, etc.).
  • CEE5.3 Capability to work in interdisciplinary engineering services teams and, provided the necessary domain experience, capability to work autonomously in specific service systems.
  • CG1 Capability to apply the scientific method to the study and analysis of phenomena and systems in any area of Computer Science, and to the conception, design and implementation of innovative and original solutions.
  • CG3 Capacity for mathematical modeling, calculation and experimental design in technology centers and engineering companies, particularly in research and innovation in all areas of Computer Science.
  • CG4 Capacity for general and technical management of research, development and innovation projects, in companies and technology centers in the field of Informatics Engineering.
  • CG5 Capability to apply innovative solutions and to advance knowledge in order to exploit the new paradigms of computing, particularly in distributed environments.
  • CG1 Capability to plan, calculate and design products, processes and facilities in all areas of Computer Science.
  • CG2 Capacity for management of products and installations of computer systems, complying with current legislation and ensuring the quality of service.
  • CG3 Capability to lead, plan and supervise multidisciplinary teams.
  • CG4 Capacity for mathematical modeling, calculation and simulation in technology centers and engineering companies, particularly in research, development and innovation tasks in all areas related to Informatics Engineering.
  • CG5 Capacity for the development, strategic planning, leadership, coordination and technical and financial management of projects in all areas of Informatics Engineering, keeping up with quality and environmental criteria.
  • CG6 Capacity for general management, technical management and research projects management, development and innovation in companies and technology centers in the area of Computer Science.
  • CG7 Capacity for implementation, direction and management of computer manufacturing processes, with guarantee of safety for people and assets, the final quality of the products and their homologation.
  • CG8 Capability to apply the acquired knowledge and to solve problems in new or unfamiliar environments inside broad and multidisciplinary contexts, being able to integrate this knowledge.
  • CG9 Capacity to understand and apply the ethical responsibility, legislation and professional deontology governing the activity of the Informatics Engineering profession.
  • CG10 Capacity to apply economics, human resources and projects management principles, as well as legislation, regulation and standardization of Informatics.
  • CB6 Ability to apply the acquired knowledge and capacity for solving problems in new or unknown environments within broader (or multidisciplinary) contexts related to their area of study.
  • CB7 Ability to integrate knowledge and handle the complexity of making judgments based on information which, being incomplete or limited, includes considerations of the social and ethical responsibilities linked to the application of their knowledge and judgments.
  • CB8 Capability to communicate their conclusions, and the knowledge and rationale underpinning them, to both specialist and non-specialist audiences in a clear and unambiguous way.
  • CB9 Possession of the learning skills that enable students to continue studying in a way that is largely self-directed or autonomous.
  • CEC1 Ability to apply scientific methodologies in the study and analysis of phenomena and systems in any field of Information Technology as well as in the conception, design and implementation of innovative and original computing solutions.
  • CEC2 Capacity for mathematical modelling, calculation and experimental design in technology centres and engineering businesses, particularly in research and innovation in all areas of Computer Science.
  • CEC3 Ability to apply innovative solutions and to advance knowledge that exploits the new paradigms of Informatics, particularly in distributed environments.

TRANSVERSAL COMPETENCIES

  • CT1 Entrepreneurship and innovation. Know and understand the organization of a company and the sciences that govern its activity; Have the ability to understand labor standards and the relationships between planning, industrial and commercial strategies, quality and profit.
  • CT2 Sustainability and Social Commitment. To know and understand the complexity of economic and social phenomena typical of the welfare society; Be able to relate well-being to globalization and sustainability; Achieve skills to use in a balanced and compatible way the technique, the technology, the economy and the sustainability.
  • CT3 Efficient oral and written communication. Communicate in an oral and written way with other people about the results of learning, thinking and decision making; Participate in debates on topics of the specialty itself.
  • CT4 Teamwork. Be able to work as a member of an interdisciplinary team, either as a member or conducting management tasks, with the aim of contributing to develop projects with pragmatism and a sense of responsibility, taking commitments taking into account available resources.
  • CT5 Effective use of information resources. Manage the acquisition, structuring, analysis and visualization of data and information in the field of the specialty, and critically evaluate the results of this management.
  • CT6 Autonomous Learning. Detect deficiencies in one's own knowledge and overcome them through critical reflection and the choice of the best action to extend this knowledge.
  • CT7 Third language. Know a third language, preferably English, with an adequate oral and written level and in line with the needs of graduates.
  • CT8 Gender perspective. An awareness and understanding of sexual and gender inequalities in society in relation to the field of the degree, and the incorporation of different needs and preferences due to sex and gender when designing solutions and solving problems.

TECHNICAL COMPETENCIES

  • CE1 Skillfully use mathematical concepts and methods that underlie the problems of science and data engineering.
  • CE2 To be able to program solutions to engineering problems: Design efficient algorithmic solutions to a given computational problem, implement them in the form of a robust, structured and maintainable program, and check the validity of the solution.
  • CE3 Analyze complex phenomena through probability and statistics, and propose models of these types in specific situations. Formulate and solve mathematical optimization problems.
  • CE4 Use current computer systems, including high-performance systems, to process large volumes of data, based on knowledge of their structure, operation and particularities.
  • CE5 Design and apply signal processing techniques, choosing among different technological tools, including those of computer vision, speech recognition and multimedia data processing.
  • CE6 Build or use systems for processing and understanding written language, integrating them into other data-driven systems. Design systems for searching textual or hypertextual information and for social network analysis.
  • CE7 Demonstrate knowledge and ability to apply the necessary tools for the storage, processing and access to data.
  • CE8 Ability to choose and employ techniques of statistical modeling and data analysis, evaluating the quality of the models, validating and interpreting them.
  • CE9 Ability to choose and employ a variety of machine learning techniques and to build systems that use them for decision making, even autonomously.
  • CE10 Visualize information to facilitate the exploration and analysis of data, including the choice of an adequate representation of the data and the use of dimensionality-reduction techniques.
  • CE11 Within the corporate context, understand the innovation process, be able to propose models and business plans based on data exploitation, analyze their feasibility and be able to communicate them convincingly.
  • CE12 Apply project management practices to the integral management of the data exploitation engineering project that the student must carry out, in the areas of scope, time, cost and risk.
  • CE13 (End-of-degree work) Plan, design and carry out projects of a professional nature in the field of data engineering, leading their implementation and continuous improvement and assessing their economic and social impact. Defend the developed project before a university panel.
  • CG1 To design computer systems that integrate data of very diverse provenances and forms, create mathematical models with them, reason on these models and act accordingly, learning from experience.
  • CG2 Choose and apply the most appropriate methods and techniques to a problem defined by data that represents a challenge for its volume, speed, variety or heterogeneity, including computer, mathematical, statistical and signal processing methods.
  • CG3 Work in multidisciplinary teams and projects related to the processing and exploitation of complex data, interacting fluently with engineers and professionals from other disciplines.
  • CG4 Identify opportunities for innovative data-driven applications in evolving technological environments.
  • CG5 To be able to draw on fundamental knowledge and sound work methodologies acquired during the studies to adapt to the new technological scenarios of the future.
  • CB1 That students have demonstrated possession and understanding of knowledge in an area of study that builds on general secondary education, and is typically at a level that, while supported by advanced textbooks, also includes some aspects involving knowledge from the forefront of their field of study.
  • CB2 That students know how to apply their knowledge to their work or vocation in a professional way and possess the competences usually demonstrated through the elaboration and defense of arguments and the resolution of problems within their area of study.
  • CB3 That students have the ability to gather and interpret relevant data (usually within their area of study) in order to make judgments that include a reflection on relevant social, scientific or ethical issues.
  • CB4 That students can transmit information, ideas, problems and solutions to both specialized and non-specialized audiences.
  • CB5 That students have developed the learning skills necessary to undertake further studies with a high degree of autonomy.
  • CB7 Ability to integrate knowledge and handle the complexity of making judgments based on information which, being incomplete or limited, includes considerations on social and ethical responsibilities linked to the application of their knowledge and judgments.
  • CB10 Possess and understand knowledge that provides a basis or opportunity to be original in the development and/or application of ideas, often in a research context.
  • CG1 Identify and apply the most appropriate data management methods and processes to manage the data life cycle, considering both structured and unstructured data
  • CG2 Identify and apply methods of data analysis, knowledge extraction and visualization for data collected in disparate formats
  • CG3 Define, design and implement complex systems that cover all phases in data science projects
  • CG4 Design and implement data science projects in specific domains and in an innovative way
  • CT1 Know and understand the organization of a company and the sciences that govern its activity; have the ability to understand labor standards and the relationships between planning, industrial and commercial strategies, quality and profit. Being aware of and understanding the mechanisms on which scientific research is based, as well as the mechanisms and instruments for transferring results among socio-economic agents involved in research, development and innovation processes.
  • CT5 Achieving a level of spoken and written proficiency in a foreign language, preferably English, that meets the needs of the profession and the labour market.

GENDER PERSPECTIVE

  • CT6 An awareness and understanding of sexual and gender inequalities in society in relation to the field of the degree, and the incorporation of different needs and preferences due to sex and gender when designing solutions and solving problems.
  • CE1 Develop efficient algorithms based on the knowledge and understanding of the computational complexity theory and considering the main data structures within the scope of data science
  • CE2 Apply the fundamentals of data management and processing to a data science problem
  • CE3 Apply data integration methods to solve data science problems in heterogeneous data environments
  • CE4 Apply scalable storage and parallel data processing methods, including data streams, once the most appropriate methods for a data science problem have been identified
  • CE5 Model, design, and implement complex data systems, including data visualization
  • CE6 Design the Data Science process and apply scientific methodologies to obtain conclusions about populations and make decisions accordingly, from both structured and unstructured data and potentially stored in heterogeneous formats.
  • CE7 Identify the limitations imposed by data quality in a data science problem and apply techniques to lessen their impact
  • CE8 Extract information from structured and unstructured data by considering their multivariate nature.
  • CE9 Apply appropriate methods for the analysis of non-traditional data formats, such as processes and graphs, within the scope of data science
  • CE10 Identify machine learning and statistical modeling methods to use and apply them rigorously in order to solve a specific data science problem
  • CE11 Analyze and extract knowledge from unstructured information using natural language processing techniques, text and image mining
  • CE12 Apply data science in multidisciplinary projects to solve problems in new or poorly explored domains from a data science perspective that are economically viable, socially acceptable, and in accordance with current legislation
  • CE13 Identify the main threats related to ethics and data privacy in a data science project (both in terms of data management and analysis) and develop and implement appropriate measures to mitigate these threats
  • CE14 Execute, present and defend an original exercise carried out individually in front of an academic commission, consisting of an engineering project in the field of data science synthesizing the competences acquired in the studies
  • CG1 To ideate, draft, organize, plan and develop projects in the field of artificial intelligence.
  • CG2 To use the fundamental knowledge and solid work methodologies acquired during the studies to adapt to the new technological scenarios of the future.
  • CG3 To define, evaluate and select hardware and software platforms for the development and execution of computer systems, services and applications in the field of artificial intelligence.
  • CG4 Reasoning, analyzing reality and designing algorithms and formulations that model it. To identify problems and construct valid algorithmic or mathematical solutions, eventually new, integrating the necessary multidisciplinary knowledge, evaluating different alternatives with a critical spirit, justifying the decisions taken, interpreting and synthesizing the results in the context of the application domain and establishing methodological generalizations based on specific applications.
  • CG5 Work in multidisciplinary teams and projects related to artificial intelligence and robotics, interacting fluently with engineers and professionals from other disciplines.
  • CG6 To identify opportunities for innovative applications of artificial intelligence and robotics in constantly evolving technological environments.
  • CG7 To interpret and apply current legislation, as well as specifications, regulations and standards in the field of artificial intelligence.
  • CG8 Perform an ethical exercise of the profession in all its facets, applying ethical criteria in the design of systems, algorithms, experiments and the use of data, in accordance with the ethical systems recommended by national and international organizations, with special emphasis on security, robustness, privacy, transparency, traceability, prevention of bias (race, gender, religion, territory, etc.) and respect for human rights.
  • CG9 To face new challenges with a broad vision of the possibilities of a professional career in the field of Artificial Intelligence. Develop the activity applying quality criteria and continuous improvement, and act rigorously in professional development. Adapt to organizational or technological changes. Work in situations of lack of information and/or with time and/or resource restrictions.
  • CE01 To be able to solve the mathematical problems that may arise in the field of artificial intelligence. Apply knowledge from: algebra, differential and integral calculus and numerical methods; statistics and optimization.
  • CE02 To master the basic concepts of discrete mathematics, logic, algorithmics and computational complexity, and their application to the automatic processing of information through computer systems, and to be able to apply all of these to solving problems.
  • CE03 To identify and apply the basic algorithmic procedures of computer technologies to design solutions to problems by analyzing the suitability and complexity of the proposed algorithms.
  • CE04 To design and use efficiently the most appropriate data types and structures to solve a problem.
  • CE05 To be able to analyze and evaluate the structure and architecture of computers, as well as the basic components that make them up.
  • CE06 To be able to identify the features, functionalities and structure of Operating Systems and to design and implement applications based on their services.
  • CE07 To interpret the characteristics, functionalities and structure of Distributed Systems, Computer Networks and the Internet and design and implement applications based on them.
  • CE08 To detect the characteristics, functionalities and components of data managers, which allow the adequate use of them in information flows, and the design, analysis and implementation of applications based on them.
  • CE09 To ideate, design and integrate intelligent data analysis systems with their application in production and service environments.
  • CE10 To analyze, design, build and maintain applications in a robust, secure and efficient way, choosing the most appropriate paradigm and programming languages.
  • CE11 To identify and apply the fundamental principles and basic techniques of parallel, concurrent, distributed and real-time programming.
  • CE12 To master the fundamental principles and models of computing and to know how to apply them in order to interpret, select, assess, model, and create new concepts, theories, uses and technological developments related to artificial intelligence.
  • CE13 To evaluate the computational complexity of a problem, identify algorithmic strategies that can lead to its resolution and recommend, develop and implement the one that guarantees the best performance in accordance with the established requirements.
  • CE14 To master the foundations, paradigms and techniques of intelligent systems and to analyze, design and build computer systems, services and applications that use these techniques in any field of application, including robotics.
  • CE15 To acquire, formalize and represent human knowledge in a computable form for solving problems through a computer system in any field of application, particularly those related to aspects of computing, perception and action in intelligent environments.
  • CE16 To design and evaluate human-machine interfaces that guarantee the accessibility and usability of computer systems, services and applications.
  • CE17 To develop and evaluate interactive systems and presentation of complex information and its application to solving human-computer and human-robot interaction design problems.
  • CE18 To acquire and develop computational learning techniques and to design and implement applications and systems that use them, including those dedicated to the automatic extraction of information and knowledge from large volumes of data.
  • CE19 To use current computer systems, including high-performance systems, for the processing of large volumes of data from the knowledge of its structure, operation and particularities.
  • CE20 To select and put to use techniques of statistical modeling and data analysis, assessing the quality of the models, and validating and interpreting them.
  • CE21 To formulate and solve mathematical optimization problems.
  • CE22 To represent, design and analyze dynamic systems. To acquire concepts such as observability, stability and controllability.
  • CE23 To design controllers for dynamic systems that represent temporary physical phenomena in a real environment.
  • CE24 To ideate, design and build intelligent robotic systems to be applied in production and service environments, and that have to be capable of interacting with people. Also, to create collaborative and social intelligent robotic systems.
  • CE25 To ideate, design and integrate mobile robots with autonomous navigation capability, fleet formation and interaction with humans.
  • CE26 To design and apply techniques for processing and analyzing images, as well as computer vision techniques, in the area of artificial intelligence and robotics.
  • CE27 To design and apply speech processing techniques, speech recognition and human language comprehension, with application in social artificial intelligence.
  • CE28 To plan, ideate, deploy and direct projects, services and systems in the field of artificial intelligence, leading their implementation and continuous improvement and assessing their economic and social impact.
  • CE29 To elaborate, present and defend before a university committee an original exercise carried out individually, consisting of a project in the field of artificial intelligence, in which the skills acquired throughout the degree's courses are synthesized and integrated.
  • K1 Recognize the basic principles of biology, from cellular to organism scale, and how these are related to current knowledge in the fields of bioinformatics, data analysis, and machine learning; thus achieving an interdisciplinary vision with special emphasis on biomedical applications.
  • K2 Identify mathematical models and statistical and computational methods that allow for solving problems in the fields of molecular biology, genomics, medical research, and population genetics.
  • K3 Identify the mathematical foundations, computational theories, algorithmic schemes and information organization principles applicable to the modeling of biological systems and to the efficient solution of bioinformatics problems through the design of computational tools.
  • K4 Integrate the concepts offered by the most widely used programming languages in the field of Life Sciences to model and optimize data structures and build efficient algorithms, relating them to each other and to their application cases.
  • K5 Identify the nature of the biological variables that need to be analyzed, as well as the mathematical models, algorithms, and statistical tests appropriate to develop and evaluate statistical analyses and computational tools.
  • K6 Recognize the ethical problems that arise from advances in the knowledge and in the application of biological concepts and their computational processing.
  • K7 Analyze the sources of scientific information, valid and reliable, to justify the state of the art of a bioinformatics problem and to be able to address its resolution.
  • S1 Integrate omics and clinical data to gain a greater understanding and a better analysis of biological phenomena.
  • S2 Computationally analyze DNA, RNA and protein sequences, including comparative genome analyses, using computation, mathematics and statistics as basic tools of bioinformatics.
  • S3 Solve problems in the fields of molecular biology, genomics, medical research and population genetics by applying statistical and computational methods and mathematical models.
  • S4 Develop specific tools that enable solving problems on the interpretation of biological and biomedical data, including complex visualizations.
  • S5 Disseminate information, ideas, problems and solutions from bioinformatics and computational biology to a general audience.
  • S6 Identify and interpret relevant data, within the area of study, to make judgments that include social, scientific or ethical reflections.
  • S7 Implement programming methods and data analysis based on the development of working hypotheses within the area of study.
  • S8 Make decisions, and defend them with arguments, in the resolution of problems in the areas of biology, as well as, within the appropriate fields, health sciences, computer sciences and experimental sciences.
  • S9 Exploit biological and biomedical information to transform it into knowledge; in particular, extract and analyze information from databases to solve new biological and biomedical problems.
  • S10 Use acquired knowledge and the skills of bioinformatics problem solving in new or unfamiliar environments within broader (or multidisciplinary) contexts related to bioinformatics and computational biology.

COMPETENCES

  • C1 Apply knowledge in an integrated way to their work or vocation in a professional manner, and adopt behaviors consistent with ethical and responsible professional practice, taking into account people's human and fundamental rights and respecting the principles of universal accessibility.
  • C2 Identify the complexity of the economic and social phenomena typical of the welfare society and relate welfare to globalization, sustainability and climate change in order to use technique, technology, economy and sustainability in a balanced and compatible way.
  • C3 Communicate orally and in writing with others in the English language about learning, thinking and decision making outcomes.
  • C4 Work as a member of an interdisciplinary team, either as an additional member or performing managerial tasks, in order to contribute to the development of projects (including business or research) with pragmatism and a sense of responsibility and ethical principles, assuming commitments taking into account the available resources.
  • C5 Unify the acquisition, structuring, analysis and visualization of data and information in the field of specialty, critically assessing the results of such management.
  • C6 Detect deficiencies in the own knowledge and overcome them through critical reflection and the choice of the best action to expand this knowledge.
  • C7 Detect, from within the scope of the degree, inequalities based on sex and gender in society; integrate the different needs and preferences based on sex and gender in the design of solutions and problem solving.
  • C8 Work in the formulation, design and management of projects in the field of bioinformatics.

TFM WORK AREAS

  • 1. TAC (ICT) Projects focused on the Educat 1x1 program, creation of digital materials, etc.
  • 2. Subject-specific didactics Projects addressing aspects of the specialization curriculum: creation of materials or resources for the subject, planning of course credits taught in English, projects in the technology classroom, etc.
  • 3. Psychopedagogy Attention to diversity, school-community relations, newcomer reception classrooms, motivation, tutoring, etc.
  • 4. School organization Design of projects with a cross-cutting impact on the school's organization, such as issues related to occupational risks, quality, etc.
  • 5. Other Synthesis works, projects or research works

TFM COMPETENCES

  • CT1 Know and analyze the curricular contents of the specialization in relation to teaching and learning processes
  • CT2 Plan, develop and evaluate the teaching and learning process of the respective subjects
  • CT3 Search for, obtain and process information resources (oral, written, digital, multimedia) and apply them in the teaching and learning processes of the subjects of the specialization
  • CT4 Propose a curriculum program and develop and apply teaching methodologies
  • CT5 Design and develop learning spaces
  • CT6 Acquire strategies to stimulate student effort
  • CT7 Know the processes of interaction and communication in the classroom
  • CT8.1 Design and carry out formal and non-formal learning activities
  • CT8.2 Perform tutoring and guidance functions
  • CT8.3 Participate in the evaluation, research and innovation of teaching and learning processes
  • CT9 Know and analyze the regulations and institutional organization of the education system
  • CT10 Know and analyze the historical characteristics of the profession
  • CT11 Design processes for informing and advising families

Generic Technical Competences

Technical competences of each specialization: compulsory subjects of the specialization.

  • Advanced 3D Modeling (A3DM-MIRI)
  • Fast Realistic Rendering (FRR-MIRI)
  • Geometric Tools for Computer Graphics (GTCG-MIRI)
  • Virtual and Augmented Reality (VAR-MIRI)

Specialization complementary subjects

  • Computer Animation (CA-MIRI)
  • Geometry Processing (GPR-MIRI)
  • Scalable Rendering for Graphics and Game Engines (SRGGE-MIRI)
  • Scientific Visualization (SV-MIRI)

Seminar activities

The activities to obtain the SIRI credits can be done in any semester of the master's degree. Consult the detail of the seminars.


© Facultat d'Informàtica de Barcelona - Universitat Politècnica de Catalunya


COMMENTS

  1. Theses and Dissertations

    Master's thesis, Cornell University, 1992. Peter W. Pruyn. An exploration of three dimensional computer graphics in cockpit avionics. Master's thesis, Cornell University, 1992. Mark C. Reichert. A two-pass radiosity method driven by lights and viewers position. Master's thesis, Cornell University, 1992.

  2. Theses

    Theses. SEPs and Theses. Within the curriculum of Computer Science students are required to pursue a SEP and a thesis. Both types consist of working on a scientific research topic on your own and present the results within a written essay. For the SEPs and bachelor thesis, the focus is on practical implementation of some research material.

  3. Bachelor Thesis

    The length of the thesis should be between 15 and 25 pages. The structure should be like a thesis (i.e., containing introduction, state-of-the-art, description of method, results, conclusions, and references). Use the LaTeX template by the faculty. There are hints on writing the thesis text. Be sure to follow the Code of Ethics.

  4. (PDF) Advancements and Applications of Computer Graphics: A

    Computer graphics have profoundly transformed various fields, including design, art, education, entertainment, and scientific visualization. The study thoroughly examines the historical ...

  5. Theses

    Bachelor and Master Theses. We permanently offer proposals for bachelor and master thesis projects in all areas across our research activities (see our publication page) and related subjects which cover most topics in Computer Graphics. The thesis topics are usually specified in cooperation with one of our research assistants and/or Prof. Kobbelt taking into account the student's individual ...

  6. PDF Computer Graphics in Cinematography

    Computer graphics and visual effects are an essential part of the commercials and movie industry nowadays. The development of computer graphics (CG later in the text) dramatically rocketed up since the first computer animation effect was used in a movie. The industry of cinematography can hardly be imagined without visual effects and

  7. Scientific Visualization and Computer Graphics

    give details on your technique/method and interesting (new) implementation aspects (e.g., discussing a class structure may be important for software engineering or a software documentation but not so much for graphics/visualization) use illustrations to clarify your concept and realization (e.g., concept sketches, screenshots, pictures of your ...

  8. Thesis Topics

    Computer Graphics Topics. 3D self-localization and mapping with a dynamic 3D scanner. You are given a dynamic 3D point cloud scanner, such as a Microsoft "Kinect". ... The topic could be formed into a bachelor or master thesis. Machine learning and deep networks for coarse-graining in multi-scale simulations.

  9. Topics for Projects and Theses

    Betreuung/Supervision. For more information about diploma theses, projects, and bachelor theses please see the respective pages - this page just lists topics for these projects.. The best way to obtain a topic for a Computer Science Project, a Bachelor Thesis or a Diploma Thesis is to contact the supervisor of one of the topics listed below by email.

  10. Theses

    Bachelor and Master Theses. We are offering topics for Bachelor and Master thesis in different areas of computer graphics and computer vision. Please contact Prof. Dr.-Ing. H. Lensch via email.

  11. Ideas for Undergraduate Thesis Topic in Computer Graphics

    I got extremely interested in computer graphics the past year and I want to pursue a Ph.D. in graphics. I am particularly interested in physical-based simulation/rendering and basically anything that looks realistic and cool. I want to do a senior year thesis in the field to get some experience as well as boost my application for grad school.

  12. Theses

    A list of completed theses and new thesis topics from the Computer Vision Group. Are you about to start a BSc or MSc thesis? Please read our instructions for preparing and delivering your work. PhD Theses Master Theses Bachelor Theses Thesis Topics. Novel Techniques for Robust and Generalizable Machine Learning. PDF Abstract.

  13. RPI Computer Graphics

    Computer Graphics @ RPI: People (Faculty, Graduate students, Undergraduate students, Alumni), Publications (Journal Articles, Conference Papers, Posters), Theses (Ph.D. Theses, Master's Theses, Bachelor's Theses), Projects, Events (Colloquia & Seminars), Courses, Group Activities (Hiking Trips, Conferences) ... Master's Thesis July 2009. Bachelor's Theses.

  14. CGL @ ETHZ

    Semester, Bachelor and Master Theses. We are working on a large variety of topics in the field of computer graphics / machine learning and related areas: Physics-based animation, rendering, geometric modeling, computational materials, computer-aided learning, medical simulations, display technology, as well as image and video-based techniques.

  15. [PDF] Creating guidelines for game character designs : Bachelor thesis

    This thesis will address the subject of character design for games by developing a method for creating a design template that one can use as basic guidelines when designing a character. ... Bachelor thesis in the subject of computer graphics arts @inproceedings{Lundwall2017CreatingGF, title={Creating guidelines for game character designs ...

  16. How to write a thesis

    Bad: It is a well-known problem in computer graphics that this interface limits the possibilities, which is why some data, such as textures, ... Figure 1 Schedule for a Bachelor's Thesis.. Create a schedule and stick to it (for example, as shown in Figure 1). Also plan buffer time for unforeseeable events and don't let holidays, such as ...

  17. Thesis

    Bachelor: 25.05.2023: Bonzcek, Lars: Visualizing Non-Euclidean 3D Polyhedral Spaces Using Portals: Bachelor: 21.04.2023: Gerhardt, Jakob: Exploration of a Backtracking Algorithm on A Tiled Many-Core Processor: Using The Graphcore IPU to Accelerate A Graph Coloring Problem: Bachelor: 12.04.2023: Reinke, Felix: Phong Tessellation in Higher ...

  18. Thesis & Project Topics

    Keywords: Bachelor Thesis • Deep Learning • Machine Learning • optimisation • taken • texture; AI for Content Creation; ... Computer Graphics Group Charles University, KSVI, MFF. Malostranské náměstí 25 118 00 Prague 1. Czech Republic. Computer Graphics Group, 2024.

  19. PDF Real-Time Set Editing in a Virtual Production Environment with an

    Parts of this thesis are written in line with an EU co-funded project Dreamspace (cf. Dream-space Project). Dreamspace is a three year running project with the goal to research, develop and demonstrate tools to allow creative professionals to collaborate and combine live perform-ances, video and computer-generated imagery in real-time.

  20. TU Wien

    Write a small thesis. The length of the thesis should be between 15 and 25 pages. The structure should be like a thesis (i.e., containing introduction, state-of-the-art, description of method, results, conclusions, and references). Use the LaTeX template by the faculty. There are hints on writing the thesis text. Be sure to follow the Code of ...

  21. Thesis Informatics

    Bachelor's thesis / Master´s thesis. From 15 January 2024, all final theses in the School of Computation, Information and Technology will be managed via the CIT portal. Once you have found a topic and a supervising chair for your thesis, you will be registered by the supervising chair. You will receive an e-mail asking you to confirm your ...

  22. Department of Computer Graphics Technology Degree Theses

    "The Department of Computer Graphics Technology (CGT) offers the Master of Science degree with a thesis option. Students may choose courses that deal with virtual and augmented reality, product lifecycle management, and interactive media research." Below are some degree theses on the aforementioned subjects and topics.

  23. Computer Graphics and Virtual Reality

    Technical Competences of each Specialization COMPUTER GRAPHICS AND VIRTUAL REALITY. CEE1.1 Capability to understand and know how to apply current and future technologies for the design and evaluation of interactive graphic applications in three dimensions, either when prioritizing image quality or when prioritizing interactivity and speed, and to understand the associated trade-offs and the ...