Complete one of the following options:
| Code | Title | Hours |
| --- | --- | --- |
|  | Complete 12 semester hours from the elective course list below. | 12 |
| Code | Title | Hours |
| --- | --- | --- |
|  | Master’s Project | 4 |
|  | Complete 8 semester hours from the elective course list below.* | 8 |

*Students in Vancouver complete 4 semester hours from the approved electives below.
| Code | Title | Hours |
| --- | --- | --- |
|  | Master’s Project* | 4 |
|  | Thesis | 4 |
|  | Complete 4 semester hours from the elective course list below.** | 4 |

In addition to completing the thesis course, students must successfully complete the thesis submission process, including securing committee and Graduate School of Engineering signatures and submitting an electronic copy of their MS thesis to ProQuest.

*Students in Vancouver complete twice for a total of 8 semester hours.

**Students in Vancouver complete in lieu of an elective.
| Code | Title | Hours |
| --- | --- | --- |
|  | Complete the following. Students must complete to qualify for co-op experience: |  |
|  | Introduction to Cooperative Education | 1 |
|  | Co-op Work Experience | 0 |
| or | Co-op Work Experience - Half-Time |  |
| or | Co-op Work Experience Abroad - Half-Time |  |
| or | Co-op Work Experience Abroad |  |
Any course in the following list will serve as an elective course, provided the course is offered and the student satisfies prerequisites and program requirements. Students can take electives outside this list with prior approval from the faculty advisor.
| Code | Title | Hours |
| --- | --- | --- |
| **General Engineering** |  |  |
|  | Product Development for Engineers |  |
| **Civil Engineering and Environmental Engineering** |  |  |
|  | Time Series and Geospatial Data Sciences |  |
| **Computer Science** |  |  |
|  | Foundations of Artificial Intelligence |  |
|  | Game Artificial Intelligence |  |
|  | Database Management Systems |  |
|  | Computer Graphics |  |
|  | Pattern Recognition and Computer Vision |  |
|  | Robotic Science and Systems |  |
|  | Algorithms |  |
|  | Natural Language Processing |  |
|  | Machine Learning |  |
|  | Information Retrieval |  |
|  | Fundamentals of Cloud Computing |  |
| **Data Science** |  |  |
|  | Introduction to Programming for Data Science |  |
|  | Introduction to Data Management and Processing |  |
|  | Supervised Machine Learning and Learning Theory |  |
|  | Unsupervised Machine Learning and Data Mining |  |
| **Electrical and Computer Engineering** |  |  |
|  | Introduction to Machine Learning and Pattern Recognition |  |
|  | Advanced Machine Learning |  |
| **Engineering Management** |  |  |
|  | Engineering Project Management |  |
|  | Economic Decision Making |  |
|  | Financial Management for Engineers |  |
| **Health Informatics** |  |  |
|  | Introduction to Health Informatics and Health Information Systems |  |
|  | Data Management in Healthcare |  |
|  | Theoretical Foundations in Personal Health Informatics |  |
|  | Evaluating Health Technologies |  |
|  | Business of Healthcare Informatics |  |
|  | Improving the Patient Experience through Informatics |  |
|  | Management Issues in Healthcare Information Technology |  |
|  | Introduction to Health Data Analytics |  |
| **Industrial Engineering** |  |  |
|  | Healthcare Systems Modeling and Analysis |  |
|  | Manufacturing Methods and Processes |  |
|  | Human Performance |  |
|  | Supply Chain Engineering |  |
|  | Simulation Analysis |  |
|  | Intelligent Manufacturing |  |
|  | Statistical Methods in Engineering |  |
|  | Statistical Quality Control |  |
|  | Reliability Analysis and Risk Assessment |  |
|  | Applied Reinforcement Learning in Engineering |  |
|  | Statistical Learning for Engineering |  |
|  | Sociotechnical Systems: Computational Models for Design and Policy |  |
|  | Applied Natural Language Processing in Engineering |  |
|  | Neural Networks and Deep Learning |  |
| **Information Systems** |  |  |
|  | Advances in Data Sciences and Architecture |  |
| **Mathematics** |  |  |
|  | Introduction to Mathematical Methods and Modeling |  |
|  | Optimization and Complexity |  |
|  | Machine Learning and Statistical Learning Theory 1 |  |
|  | Statistics for Bioinformatics |  |
|  | Mathematical Statistics |  |
|  | Applied Statistics |  |
|  | Regression, ANOVA, and Design |  |
|  | Network Science |  |
|  | Network Science 2 |  |
|  | Network Economics |  |
|  | Bayesian and Network Statistics |  |
| **Operations Research** |  |  |
|  | Metaheuristics and Applications |  |
|  | Probabilistic Operations Research |  |
|  | Integer and Nonlinear Optimization |  |
|  | Network Analysis and Advanced Optimization |  |
|  | Convex Optimization and Applications |  |
|  | Logistics, Warehousing, and Scheduling |  |
| **Physics** |  |  |
|  | Network Science 1 |  |
| **Public Policy and Urban Affairs** |  |  |
|  | Dynamic Modeling for Environmental Decision Making |  |
|  | Big Data for Cities |  |
|  | Geographic Information Systems for Urban and Regional Policy |  |
|  | Advanced Spatial Analysis of Urban Systems |  |
32 total semester hours required (33 with optional co-op)

Minimum 3.000 GPA required
Coursework option is not available to students in Vancouver.
A thesis is required for all students who receive financial support from the university in the form of a research, teaching, or tuition assistantship. The thesis topic should cover one or more of the areas from statistics, mathematics, optimization, data mining, machine learning, database design, Big Data, visualization tools, or forecasting methods. The thesis should train students for research in data and operations analytics and/or prepare them for a doctoral program.
Approved elective for students in Vancouver.
Using analytics to foster business success.
Through Northeastern’s interdisciplinary Master of Science (MS) in Data Analytics Engineering program, you will build on your existing engineering or science foundation to gain employment at businesses of all kinds and improve their products, processes, systems, and enterprises, all through the power of data mining, data management, machine learning, and visualization.
You’ll have the flexibility to tailor your degree to your professional goals through a number of electives across Northeastern in areas including business, engineering, healthcare, manufacturing, urban communities, computer science, and information systems.
The master’s in data analytics engineering is ideal for those who wish to develop expertise in the analysis and optimization of data to solve problems and support decision-making. For those more interested in the process of generating, collecting, and mining data by developing algorithms and computational tools, Northeastern’s Khoury College of Computer Sciences also offers a Master of Science in Data Science.
Irrespective of your engineering major, you will gain rigorous analytical skills and research experience through technically advanced core courses in operations research, statistics, data mining, database management, and visualization. You can further specialize your degree with flexible electives from diverse disciplines across colleges at Northeastern in areas such as:
Upon graduation with your MS in Data Analytics Engineering degree, you’ll be prepared to take on a data analyst position in any industry or to enter a doctoral program in areas including engineering, healthcare, business, finance, security, or sustainability.
You’ll gain valuable research experience desired by employers by designing and developing analytics projects in both individual and group settings. Some recent research projects have included:
The MS in Data Analytics Engineering is available at Northeastern’s campus in Boston, MA, as well as at our campuses in Seattle, WA, and Vancouver, Canada.
The research project/thesis course is a requirement for students taking the DAE program in Vancouver.
Note: A subset of program courses are available at campuses outside of Boston.
Over 15 graduate certificates are available to provide students the opportunity to develop a specialization in an area of their choice. Certificates can be taken in addition to or in combination with a master’s degree, or provide a pathway to a master’s degree in Northeastern’s College of Engineering. Master’s programs can also be combined with a Gordon Engineering Leadership certificate. Students should consult with their faculty advisor regarding these options.
Students may complete a Master of Science in Data Analytics Engineering in addition to earning a Graduate Certificate in Engineering Leadership . Students must apply and be admitted to the Gordon Engineering Leadership Program in order to pursue this option. The program requires fulfillment of the 16-semester-hour curriculum required to earn the Graduate Certificate in Engineering Leadership, which includes an industry-based challenge project with multiple mentors. The integrated 32-semester-hour degree and certificate will require 16 semester hours of advisor-approved data analytics technical courses.
Students may complete a Master of Science in Data Analytics Engineering in addition to earning a Graduate Certificate in Engineering Business. Students must apply and be admitted to the Galante Engineering Business Program in order to pursue this option. The program requires the applicant to have earned or be in a program to earn a Bachelor of Science in Engineering from Northeastern University. The integrated 32-semester-hour degree and certificate will require 16 semester hours of the data analytics engineering core courses and 16 semester hours from the outlined business-skill curriculum. The coursework, along with participation in co-curricular professional development elements, earns the Graduate Certificate in Engineering Business.
Northeastern combines rigorous academics with experiential learning and research to prepare students for real-world engineering challenges. The Cooperative Education Program, also known as “co-op,” is one of the largest and most innovative in the world, and Northeastern is one of only a few universities that offer a co-op program for graduate students. Through this program, students gain professional experience in their field of interest as part of the academic curriculum, giving them a competitive advantage upon graduation.
While on co-op, students spend a 4-, 6-, or 8-month placement working in industries ranging from finance and technology to energy and healthcare here in Boston and across the country. Recent MS in Data Analytics Engineering co-op partners include Amazon, Apple, Fidelity Investments, Fraunhofer USA, Wayfair, Roku, IBM, Grantham, Mayo, Van Otterloo & Co, LLC, Commonwealth of Massachusetts, Natixis Investment Managers, State Street Global Services, and McKinsey & Company, Inc.
The Master of Science (MS) in Data Analytics Engineering is designed to help students acquire knowledge and skills to:
This degree program seeks to prepare students for a comprehensive list of tasks including collecting, storing, processing, and analyzing data; reporting statistics and patterns; drawing conclusions and insights; and making actionable recommendations.
The in-demand field of data analytics opens career doors around the globe and across industries, including healthcare, smart manufacturing, supply chain and logistics, national security, defense, banking, finance, marketing, and human resources.
Demand for data professionals has never been higher. Employment of operations research analysts is projected to grow 23 percent from 2021 to 2031, much faster than the average for all occupations, and annual wages range from a median of $83,000 upwards to $160,000, according to the U.S. Bureau of Labor Statistics (May 2021).
The Academic Advisors in the Graduate Student Services office can help answer many of your questions and assist with various concerns regarding your program and student record. Use the link below to also determine which questions can be answered by your Faculty Program Advisors and OGS Advisors.
Ready to take the next step? Review degree requirements to see courses needed to complete this degree. Then, explore ways to fund your education. Finally, review admissions information to see our deadlines and gather the materials you need to apply.
Through continued interest in the field, Tejas Karwa, E’23, industrial engineering, MS’24, data analytics engineering, joined the Galante Engineering Business Program to expand his knowledge of the engineering industry and broaden his career prospects.
Twenty-three engineering graduate students were inducted into the newly established Lux. Veritas. Virtus. society, a prestigious honor that recognizes exceptional graduate students who exemplify the university’s mission, ideals, and values.
Sixteen students from the College of Engineering were selected as 2024 members of the Northeastern University “Huntington 100,” a group of students recognized for outstanding achievements commensurate with the university’s mission, ideals, values, and academic plan.
After completing an undergraduate degree in mathematics in her home of New Delhi, Ishpreet Kaur Sethi, MS’23, data analytics engineering, came to Northeastern’s College of Engineering to earn a master’s degree and maximize the co-op experience by working as a data analyst at the UK-based Alchemab Therapeutics. Now she is poised to pursue global opportunities.
If you are interested in writing a Bachelor’s or Master’s thesis, please send me an e-mail (maximilian.schuele(at)uni-bamberg.de) with your transcript of records attached and your preferred topic.
SQL++: extending database systems with building blocks.
The idea is to expose building blocks of database systems, for example hash tables, for data mining and machine learning algorithms. The thesis should build upon an existing open-source database system, for example Hyrise, where you first write selected algorithms in SQL. Afterwards, you improve the performance of the algorithms by adding certain suboperators to the database system. For example, you can perform gradient descent iteratively using recursive CTEs, then create an operator for iterations, for example a trampoline, with lower memory consumption. The building blocks should be accessed through an extension of SQL via user-defined functions (UDFs), called SQL++.
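As a minimal sketch of the iterative pattern mentioned above (the objective function and all names are illustrative, not part of SQL++ itself), gradient descent can be expressed as a recursive CTE; this is exactly the kind of iteration the thesis would replace with a dedicated suboperator:

```python
import sqlite3

# Minimize f(w) = (w - 3)^2 by iterating w := w - lr * f'(w)
# inside a recursive CTE, using SQLite as a stand-in engine.
conn = sqlite3.connect(":memory:")
query = """
WITH RECURSIVE gd(step, w) AS (
    SELECT 0, 0.0                         -- initial weight
    UNION ALL
    SELECT step + 1,
           w - 0.1 * 2.0 * (w - 3.0)      -- lr = 0.1, f'(w) = 2 (w - 3)
    FROM gd
    WHERE step < 50
)
SELECT w FROM gd ORDER BY step DESC LIMIT 1;
"""
(w,) = conn.execute(query).fetchone()
print(w)  # converges toward the minimizer w = 3
```

Each recursive step materializes a new row, which is the memory overhead a trampoline-style iteration operator would avoid.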
LeanStore is an open-source system for OLTP and OLAP workloads but lacks an SQL interface. The goal of this thesis is to write a query compiler for it in C++.
Modern database systems generate code instead of interpreting function calls for an operator tree. In this thesis, you will generate code to run on GPUs and investigate how SIMT (single instruction, multiple threads) can accelerate query processing.
You will implement a protocol to access camera data for crowd detection.
Usage of higher-order lambda functions.
For this topic, you will elaborate use cases for higher-order lambda functions in SQL. Lambda functions currently exist only as functions to customise database operators, but the plan is to extend database systems with higher-order functions. A use case would be a ModelDB: a relation that stores different models that should be executed at runtime. In your thesis, you will invent use cases for higher-order SQL lambda functions based on existing models for data pipelines.
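The ModelDB use case above can be approximated in a few lines; this is a conceptual sketch (the schema, function names, and models are all invented for illustration) in which the model to apply is itself data stored in the relation:

```python
import sqlite3

# Hypothetical "ModelDB": each row names the model that should score it.
models = {
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
}

conn = sqlite3.connect(":memory:")
# apply(model, x) plays the role of a higher-order SQL lambda:
# it looks up and invokes the function named by the row's data.
conn.create_function("apply", 2, lambda name, x: models[name](x))

conn.execute("CREATE TABLE readings (model TEXT, x REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [("double", 3.0), ("square", 3.0)])

rows = conn.execute("SELECT model, apply(model, x) FROM readings").fetchall()
print(rows)  # [('double', 6.0), ('square', 9.0)]
```

A true higher-order extension would make the dispatch a first-class SQL construct rather than an external UDF, but the data flow is the same.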
Write a NumPy to SQL compiler based on a relational representation of matrices to compute matrix algebra in database systems.
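As a hedged sketch of the relational matrix representation this topic builds on (table names and values are illustrative), a matrix can be stored as (row, column, value) triples, and matrix multiplication becomes a join on the shared index plus an aggregation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (i INT, j INT, v REAL)")
conn.execute("CREATE TABLE b (i INT, j INT, v REAL)")

# A = [[1, 2], [3, 4]] and B = [[5, 6], [7, 8]] in coordinate form
conn.executemany("INSERT INTO a VALUES (?, ?, ?)",
                 [(0, 0, 1), (0, 1, 2), (1, 0, 3), (1, 1, 4)])
conn.executemany("INSERT INTO b VALUES (?, ?, ?)",
                 [(0, 0, 5), (0, 1, 6), (1, 0, 7), (1, 1, 8)])

# C = A @ B: join A's column index with B's row index, sum the products
rows = conn.execute("""
    SELECT a.i, b.j, SUM(a.v * b.v)
    FROM a JOIN b ON a.j = b.i
    GROUP BY a.i, b.j
    ORDER BY a.i, b.j
""").fetchall()
print(rows)  # [(0, 0, 19.0), (0, 1, 22.0), (1, 0, 43.0), (1, 1, 50.0)]
```

A NumPy-to-SQL compiler would translate expressions such as `a @ b` into queries of this shape.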
Write or generate SQL queries in order to solve mathematical equations in SQL.
Generate the SQL code for automatically deriving model functions to train neural networks in SQL.
We offer a variety of cutting-edge and exciting research topics for Bachelor’s and Master’s theses, covering a wide range of areas from Data Science, Natural Language Processing, and Argument Mining to the Use of AI in Business, Ethics in AI, and Multimodal AI. We are always open to suggestions for your own topics, so please feel free to contact us. We supervise students from all disciplines of business administration, business informatics, computer science, and industrial engineering.
Example topics could be:
Q1: How many pages do I need to write?
A: In general, the number of pages is only a poor indicator of the quality of a thesis. However, as a rule of thumb, bachelor’s theses should have around 30 pages, while master’s theses should have around 60 pages of main content (that is, without the appendix and lists of tables, symbols, figures, references, etc.).
Q2: How often should I meet with my supervisor?
A: Your supervisors are typically very busy people. However, don't hesitate to ask in case you have questions. For instance, if you are unsure of some requirements, or in case you have methodological problems, it is absolutely necessary to talk to your supervisor. As a rule of thumb, you should meet at least three times (once in the beginning, once in the middle, and once before the submission).
Q3: Am I allowed to use any AI models in the process of writing my thesis?
A: In general, we neither forbid nor recommend the use of AI for writing support. However, if you use AI, please inform your supervisor. Also, you need to adhere to the recommendations on the use of AI writing assistants given by the faculty.
Q4: How much time do I have?
A: The exact timing is dependent on your study program! Thus, please check the examination requirements before the official start of your thesis -- you are responsible for sticking to the rules.
The MS in Data Science and Engineering program offers three options, each requiring a total of 30 credits. The minimum requirements for all options are
Considering a career in one of technology’s most in-demand fields? The College of Engineering’s new Master of Science in Data Engineering may be for you. According to the Dice 2020 Tech Job Report, jobs in data engineering are growing by 50% each year, making data engineering the fastest-growing job in technology.
The Master of Science program in Data Engineering allows you to focus on your analytical, programming and engineering skills to:
Core courses give students a solid foundation in data engineering, diving into big data, data analytics, data visualization, and database systems.
Concentration areas focus on data engineering and biomedical engineering.
Students who graduate from this program will be able to:
What can you do with a degree in data engineering?
While estimates vary, a recent report from O’Reilly states that companies typically need a minimum of 2-3 data engineers for every data scientist to successfully complete projects, and the current job market is struggling to keep up with this demand. Our M.S. in Data Engineering program prepares students to enter this thriving job market right out of college.
The most in-demand jobs are data engineers, data architects, business intelligence architects, machine learning engineers, and data warehouse engineers/developers, with data engineers working across many different fields.
Kiva (32-G449)
By: Andrew Ilyas
Thesis Supervisors: Costis Daskalakis, Aleksander Madry
Abstract: Despite their impressive performance, training and deploying ML models is currently a somewhat messy affair. But does it have to be? In this defense, I’ll discuss some of my research on making ML “predictably reliable”: enabling developers to know when their models will work, when they will fail, and why. To begin, we use a case study of adversarial examples to show that human intuition can be a poor predictor of how ML models operate. Motivated by this, we present a few lines of work that aim to develop a precise understanding of the entire ML pipeline: from how we source data, to the datasets we train on, to the learning algorithms we use.
Writing a thesis is the final step in obtaining a Bachelor’s or Master’s degree. A thesis is always coupled to a scientific project in some field of expertise. Candidates who want to write their thesis in the Big Data Analytics group should, therefore, be interested and trained in a field related to our research areas.
A thesis is an independent, scientific and practical work. This means that the thesis and its related project are conducted exclusively by the candidate; the execution follows proper scientific practices; and all necessary artifacts, algorithms and evaluations have been physically implemented and submitted as part of the thesis. A proper way of sharing code and evaluation artifacts is the creation of a public GitHub repository, which can, then, be referenced in the thesis. The thesis serves as a documentation for the project and as scientific analysis and reflection of the gathered insights.
For students interested in a thesis, we offer interesting topics and close, continuous supervision during the entire thesis period. Every thesis is supervised by at least one member of our team, who can give advice and help in critical situations. The condensed results of our best master’s theses have been published at top scientific venues, such as VLDB, CIKM, and EDBT.
A selection of open thesis topics can be found on this page. We also encourage interested students to suggest their own ideas in the context of our research areas and to contact individual members of the group directly. An ideal thesis topic is connected in some form to the research projects of a group member, who will then become the supervisor for the thesis. Hence, taking a look at the personal pages and our current projects is a good starting point for a thesis project. Recent publications at conferences such as VLDB or SIGMOD, or open research challenges on, for example, Kaggle, are good resources for finding interesting thesis ideas.
KDnuggets, a community site for data professionals, ranked “We Don’t Need Data Scientists, We Need Data Engineers,” by Mihail Eric, a venture capitalist, researcher, and educator, as its top story of 2021. This sentiment holds even more true today, especially with the unending rush to leverage both generative and predictive AI within enterprise operations. Without the right kind of data, AI is dead in the water.
Data engineering—which includes not only data engineers by title but also their counterparts in adjacent fields such as database administration, management, architecture, and analysis—will ensure that AI initiatives are kept alive, well, and thriving. Accordingly, data engineers have risen to become the new stars in the AI-driven organization. For the purposes of this article, we assign roles in data engineering across a cluster of data professional categories. Collectively, as part of an overall data engineering team, these professionals are setting the tone and providing the guidance needed for developing fair, accurate, and business-viable AI models.
Everyone wants to embrace AI-related large language models in a big way, which only means more demand for data engineering. As a professional category, data engineers are essential, because in many cases, data scientists have been tasked with vetting and managing data resources, which diverts their time and resources away from building data-driven narratives for their businesses. In addition, since AI algorithms are tremendous data hogs, organizations need vibrant data pipelines to maintain the effectiveness of their AI efforts.
This is fueling significant shifts in the practice and theory of data engineering. Demand for real-time, AI-ready data is creating new challenges and opportunities for those in data engineering and adjacent fields such as database administration, management, and analysis. In the process, data engineers have entered the spotlight as enablers of the 21st-century enterprise.
This also requires more business savvy as part of the data engineering skills mix. Conversely, business teams need to have a better understanding of their data, and what it can do for their organizations. “Data practitioners are being asked to expand their knowledge of the business—while functional teams are finding they require their own internal data expertise to leverage their data,” a recent report from MIT Technology Review Insights states.
In essence, organizations are leaning heavily on data-engineering teams to turn their data assets into gold. Considerations such as organizational structure, data platform and architecture, and data governance are all essential to this process, especially as AI gets involved. Data engineers and their related colleagues are the go-to people who can make this happen.
The role of data engineering teams has always been clear: to design, construct, and maintain data architectures and ensure the viability of data moving through the organization’s systems—and this remains the primary mission. This includes ensuring that data is available for applications when and where it is needed by the business.
Helpful practices and technologies have emerged to help data engineering teams deliver on this mission, such as DevOps, DataOps, AIOps, and collaborative pipeline tools. Automation has lifted many of the burdens of database preparation, data modeling, quality assurance, and backup and storage.
As a result, the roles of data engineering teams are being elevated, from backroom maintenance to the forefront of the business. Data engineering is evolving into a role that involves greater strategizing for businesses seeking to either monetize data, leverage data to gain advantage in their markets, or boost innovation. This also involves serving as guardians of the data, ensuring compliance, cybersecurity, and privacy. Importantly, data engineering means making sure the data is there and it is ready anytime it is needed. This new importance has resulted in “staggering growth in data engineering jobs,” the MIT report states.
The evolving nature of data engineering can be seen in recent job descriptions:
Key to all data engineering roles is ensuring that the business comes first, and that all activity is directly connected with business requirements. Today’s data engineer needs to be a technologist, leader, facilitator, and troubleshooter.
This calls for close collaboration with data scientists and AI training specialists to ensure their models are receiving the data required to support business decision-makers and decisioning systems. Just as importantly, data engineering teams need to work in tandem with data owners to ensure that the right data sources are being tapped, and with end users to ensure they are working with the best available information.
The challenge in today’s environment is to remove the obstacles to effective data engineering: ensuring that data pipelines keep flowing, that much of the process is automated and operated collaboratively, and that data insights are on target with business requirements.
Even if your job title is something other than “data engineer,” in many ways, everyone on the data team now has a role to play in ensuring the viability of a data-driven and AI-driven business.
Advanced research and scholarship. Theses and dissertations, free to find, free to use.
OATD.org aims to be the best possible resource for finding open access graduate theses and dissertations published around the world. Metadata (information about the theses) comes from over 1100 colleges, universities, and research institutions. OATD currently indexes 7,202,573 theses and dissertations.
About OATD (our FAQ).
We’re happy to present several data visualizations to give an overall sense of the OATD.org collection by country of publication, language, and field of study.
You may also want to consult these sites to search for other theses:
Uniquely interdisciplinary and flexible: coursework-only, project, and thesis options.
The 30-credit Duke Master of Science in Electrical & Computer Engineering degree provides a unique combination of opportunities:
“I was looking for that strong university-industry connection. That, along with the flexibility of the coursework, which gave me a lot more bandwidth for research, made Duke the best fit for me, in the end.” Aniket Dalvi ’21, PhD Candidate at Duke University
Requirements.
The Graduate School requires a final exam approved by a committee made up of three Graduate Faculty members. The committee must be approved by the Director of Graduate Studies and the Dean of the Graduate School at least one month prior to the examination date. The student is not required to generate a written document for the ECE department, and the format of the exam is determined by the department.
For the project option, a written research report and oral presentation are required to be presented to a committee made up of the student’s advisor and two other members of the graduate faculty, one of whom must be from a department other than ECE or outside the student’s main curricular area. The committee must be approved by the Director of Graduate Studies and the Dean of the Graduate School at least one month prior to the examination date. The formats of the written and oral project reports are determined by the student’s advisor. The project report is not submitted to the Graduate School; however, a final copy must be submitted to the ECE Department.
A written thesis must be uploaded according to the guidelines presented in the Graduate School’s Guide for the Electronic Submission of Thesis and Dissertation, and the thesis must be defended orally before a committee composed of the faculty member under whose direction the work was done and at least two other members of the graduate faculty, one of whom must be from a department other than ECE or outside the student’s main curricular area. The committee must be approved by the Director of Graduate Studies and the Dean of the Graduate School at least one month prior to the examination date.
Want more information? Ready to join our community?
Senior Program Coordinator
Graduate Program Coordinator
Director of Master’s Studies, Professor in the Department of ECE
Master’s Program Coordinator
The surging popularity of generative adversarial networks (GANs) has ignited a wave of innovation in computer vision, a highly explored subfield of deep learning. GANs are reshaping machine learning because they use a game-based training technique, in contrast to traditional approaches that center on feature learning and image generation. Several subfields of computer vision have seen tremendous progress thanks to the integration of GANs with numerous processing approaches, including image, dynamic, text, audio, and video processing. Nevertheless, despite their great progress, GANs still hold unrealized promise and leave room for further development. GANs have a wide range of applications within computer vision, including data augmentation, displacement recording, dynamic modeling, and image processing. This article examines recent advances made by GAN researchers working in AI-based security and defense and discusses their accomplishments. In particular, we investigate how well image optimization, image processing, and image stabilization are incorporated into GAN-driven image training. We aim to provide a complete overview of the present state of GAN research by carefully evaluating peer-reviewed research articles.
This research received no external funding.
Kirtirajsinh Zala, Deep Thumar, Hiren Kumar Thakkar, Urva Maheshwari and Biswaranjan Acharya have contributed equally to this work.
Department of Information Technology, Marwadi University, Rajkot, Gujarat, 360006, India
Kirtirajsinh Zala
Faculty of Engineering, Marwadi Education Foundations, Rajkot, Gujarat, 360006, India
Deep Thumar & Urva Maheshwari
Department of Computer Science and Engineering, School of Technology, Pandit Deendayal Energy University, Gandhinagar, Gujarat, 382007, India
Hiren Kumar Thakkar
Department of Computer Engineering- AI and BDA, Marwadi University, Rajkot, Gujarat, 360006, India
Biswaranjan Acharya
All authors have diligently reviewed and contributed equally to this study.
Correspondence to Hiren Kumar Thakkar.
Conflict of interest.
The authors have no conflict of interest to declare that is relevant to the content of this article.
Informed consent was not applicable to this research.
This research did not involve any human participants or animals.
Publisher's note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Zala, K., Thumar, D., Thakkar, H.K. et al. A survey and identification of generative adversarial network technology-based architectural variants and applications in computer vision. Int J Syst Assur Eng Manag (2024). https://doi.org/10.1007/s13198-024-02478-6
Received : 06 February 2024
Revised : 07 July 2024
Accepted : 09 August 2024
Published : 14 August 2024