
The 2021-2022 Common App Essay: How to Write a Great Essay That Will Get You Accepted


If you’re reading this, then you’ve probably started the very exciting process of applying to college—and chances are you may be a little overwhelmed at times. That’s OK! The key to getting into the right college for you is taking each step of the application process in stride, and one of those steps is completing the Common App and the Common App essay.

In this post, you’ll learn what the Common Application essay is, how to write one (including a free checklist to help you with the process), example essays, and much more. Let’s get started!

Table of Contents

  • What is the Common App, and More Importantly, What is the Common App Essay?
  • Quick Facts on the 2021-2022 Common App Essay
  • How Do You Write a Common App Essay?
  • What Should I Avoid in My Common App Essay?
  • What Are Some Good Common App Essay Examples?
  • Common Application Essay FAQs

What is the Common App, and More Importantly, What is the Common App Essay?


The “Common App,” short for the Common Application, is a general application used to apply to multiple college undergraduate programs at once. It’s accepted by hundreds of colleges in the United States as well as some colleges internationally.

The idea is that the Common App is a “one-stop shop” so you don’t have to complete a million separate applications. That said, plenty of colleges still require their own application components, and the Common App, as user-friendly as it aims to be, can still feel like a bit of a challenge to complete.

Part of the reason the Common App can seem intimidating is because of the Common App essay component, which is required of all students who submit a college application this way. But never fear! In reality, the Common App essay is easy to ace if you know how to approach it and you give it your best.

So without further ado, let’s take a look at anything and everything you need to know about the 2021-2022 Common App essay in order to help you get into the school of your dreams. We’ve also created a downloadable quick guide to writing a great Common Application essay.


Quick Facts on the 2021-2022 Common App Essay


Below are a few of the short and sweet things you need to know about the 2021-2022 Common App essay; we’ll elaborate on much of this content later in this post.

  • The essay is required by most colleges that accept the Common App.
  • The word limit is 650 words, with a minimum of 250 words.
  • There are seven prompts to choose from, including a new prompt about gratitude.
  • An optional Covid-19 prompt is also available.

How Do You Write a Common App Essay?


The million-dollar question about the Common App essay is obviously, “How do I actually write it?!”

Now there’s something to keep in mind before exploring how to compose the Common App essay, and that’s the purpose of this task. You may be wondering:

  • What are college admissions boards actually looking for?
  • Why are you being asked to write this essay?

College admissions boards want to see that you can compose a compelling, well-crafted essay. After four years of high school, you’re expected to be able to craft a clear and concise piece of writing that addresses a specific subject.

So yes, you’re actually being evaluated on your essay writing skills, but the purpose of the Common Application essay is deeper than that—it’s to present the type of person and thinker that you are.

Regardless of which prompt you choose, colleges are trying to get a sense of how thoughtfully and critically you can reflect on your life and the world around you.

And furthermore, they want to get a sense of who you are—your interests, your personality, your values—the dimensional aspects of you as an applicant that simply can’t be expressed in transcripts and test scores. In short, you want to stand out and be memorable.

That said, there is no exact formula for “cracking the case” of the Common App essay, but there are plenty of useful steps and tips that can help you write a great essay.

(In a hurry? Download our quick and concise handout that sums up some of the keys to the Common App essay!)

1) Familiarize Yourself With the Common App Prompts and How to Approach Them

The Common App recently released the 2021-2022 essay prompts, which are almost the same as last year’s prompts, but with one BIG difference.

The prompt about problem solving (formerly prompt #4) has been replaced with a prompt about gratitude and how it has motivated you. According to Common App President and CEO Jenny Rickard, this change was inspired by new scientific research on the benefits of writing about gratitude and the positive impact others have had on our lives.

Additionally, the Common App now includes an optional Covid-19 prompt where you can discuss how you’ve personally been affected by the Covid-19 pandemic.

Now, let’s take a look at each 2021-2022 Common App prompt individually. You’ll notice that every prompt really has two parts to it:

  • share, explain, and describe a narrative, and
  • reflect on, analyze, and draw meaning from it.

Let’s take a look.

  Prompt #1: A snapshot of your story

Prompt: Some students have a background, identity, interest, or talent that is so meaningful they believe their application would be incomplete without it. If this sounds like you, then please share your story.

  • Discuss a background, identity, or interest that you feel is meaningful to who you are and/or that sets you apart from others.
  • Reflect on why this attribute is meaningful and how it has shaped you as a person.

  Prompt #2: An obstacle you overcame

Prompt: The lessons we take from obstacles we encounter can be fundamental to later success. Recount a time when you faced a challenge, setback, or failure. How did it affect you, and what did you learn from the experience?

  • Recount a time you faced a challenge, setback, or failure.
  • Reflect on how this affected you, what you learned from it, and if it led to any successes later down the line.

  Prompt #3: A belief or idea you questioned or challenged

Prompt: Reflect on a time when you questioned or challenged a belief or idea. What prompted your thinking? What was the outcome?

  • Explain a time that you questioned a particular belief or way of thinking.
  • Elaborate on what prompted this questioning, what the outcome was, and why this outcome was significant.

  Prompt #4: An experience of gratitude that has motivated you

Prompt: Reflect on something that someone has done for you that has made you happy or thankful in a surprising way. How has this gratitude affected or motivated you?

  • Describe the specific experience or interaction that made you feel a sense of gratitude. Make sure to explain who did something nice for you and why it was surprising or unexpected.
  • Explain, as specifically as possible, how this feeling of gratitude changed or motivated you. What actions did you take as a result? How did your mindset change?

  Prompt #5: An accomplishment or event that sparked personal growth

Prompt: Discuss an accomplishment, event, or realization that sparked a period of personal growth and a new understanding of yourself or others.

  • Describe an accomplishment or event that sparked personal growth for you.
  • Reflect on the nature of this growth and/or a new understanding you gained in the process.

  Prompt #6: An interest so engaging you lose track of time

Prompt: Describe a topic, idea, or concept you find so engaging that it makes you lose all track of time. Why does it captivate you? What or who do you turn to when you want to learn more?

  • Discuss a topic, idea, or interest that is so engaging to you that you lose track of time when focused on it.
  • Reflect on and explain why this interest is so important to you, and your method of learning more about it.

  Prompt #7: An essay topic of your choice

Prompt: Share an essay on any topic of your choice. It can be one you’ve already written, one that responds to a different prompt, or one of your own design.

  • Discuss any subject matter or philosophical question of interest to you.
  • Reflect on the implications of this subject or question, and how it has shaped you, transformed you, impacted your life, etc.

Now keep in mind that, to some degree, it doesn’t actually matter which prompt you choose to answer, so long as you write and present yourself well. But you obviously want to pick whichever Common App essay prompt speaks to you most, and the one you think will provide you the meatiest and most meaningful material.

As a rule of thumb, the “right” prompt will probably stand out to you. If you have to rack your brain, for example, to think of a challenge you’ve overcome and how the experience has shaped you, then that prompt probably isn’t the right one.

Authenticity is key, so choose the prompt you can answer thoroughly.

2) Brainstorm

Whether you know immediately which prompt you’re going to choose or not, do yourself a huge favor and brainstorm. Take out a notebook and jot down or free write all of the ideas that spring to your mind for as many of the prompts as you’re considering. You might be surprised what ideas you generate as you start doing this, and you might be surprised which ideas seem to have the most content and examples to elaborate on.

Also, it’s important to note that your subject matter doesn’t have to be highly dramatic or spectacular. You don’t have to recount a near-death experience, an epic overseas adventure, a 180-degree turn of faith, etc. Your ordinary life, when reflected upon thoughtfully, is interesting and profound.

3) Answer the Question (and Stay on Topic!)

This may sound painfully obvious, but for some of us, it can be hard to stay on topic. Each prompt is posed as a question, so don’t lose sight of that and let your essay devolve into a story about yourself that never really gets at the heart of the prompt.

As you’re drafting your essay—say after each paragraph—pause and refer back to the question, making sure each paragraph plays some part in actually responding to the prompt.

4) Structure and Organize Your Essay Effectively

The Common App essay isn’t like many of the other argumentative essays you’ve been taught to write in school. It is argumentative in that you are essentially arguing for why you are a good candidate for a particular college, using your personal experience as support, but it’s more than that.

The Common Application essay is essentially a narrative essay that is reflective and analytical by nature. This means that regardless of which prompt you select, you’ll be sharing something personal about yourself, and then reflecting on and analyzing why what you shared is important.

And even if this isn’t an essay format that you’re accustomed to writing, you can still rely on your knowledge of basic essay structures to help you. You’ll still need a clear introduction, body, and conclusion.

Let’s talk about those three pieces now.

Introduction

The purpose of an introduction is 1) to grab the reader’s attention and compel them to continue reading, and 2) to introduce the reader to the general subject at hand.

So the most important part of the introduction is a unique attention-getter that establishes your personal voice and tone while piquing the reader’s interest. An example of a good hook could be a brief illustrative anecdote, a quote, a rhetorical question, and so on.

Now, you may be wondering, “Do I need a thesis statement?” This is a great question and the simple answer is no.

This is because some students prefer to hook their reader with a bit of mystery and let their story unfold organically without a thesis sentence “spoiling” what is to come. This doesn’t mean you can’t have a thesis sentence, it just means you don’t need one. It just depends on how you want to build your personal narrative, and what serves you best.

That said, your essay does need a greater message or lesson in it, which is another way of saying a thesis. You just don’t necessarily have to write it out in the introduction paragraph.

It might help you to keep a thesis in mind or even write it down just for your own sake, even if you don’t explicitly use it in your introduction. Doing so can help you stay on track and help you build up to a stronger reflection.

Here are some examples of narrative thesis statements:

  • I moved a lot as a child on account of having a parent in the military, which led me to become highly adaptable to change.
  • The greatest obstacle I’ve overcome is my battle with leukemia, which has taught me both incredible resilience and reverence for the present.
  • Making the varsity volleyball team has made me grow tremendously as a person, specifically in the areas of self-confidence and collaboration.

Body

As discussed earlier, there are two parts to each prompt: explanation and reflection. Each part should be addressed throughout the essay, but how you organize your content is up to you.

A good rule of thumb for structuring the body of your essay is as follows:

  • Situate your reader: provide context for your story by focusing in on a particular setting, subject matter, or set of details. For example, you may frame an essay about an internship at the zoo with the phrase, “Elephants make the best friends.” Your reader knows immediately that the subject matter involves your interaction with animals, specifically elephants.
  • Explain more about your topic and how it affected you, using specific examples and key details.
  • Go deeper. Elaborate and reflect on the message at hand and how this particular topic shaped the person you are today.

Note that while there are no set rules for how many paragraphs you should use for your essay, be mindful of breaking paragraphs whenever you naturally shift gears, and be mindful of too-long paragraphs that just feel like walls of text for the reader.

Conclusion

Your conclusion should flow nicely from your elaboration, really driving home your message or what you learned. Be careful not to just dead-end your essay abruptly.

This is a great place to speculate on how you see the subject matter informing your future, especially as a college student and beyond. For example, what might you want to continue to learn about? What problems do you anticipate being able to solve given your experience?

5) Write Honestly, Specifically, and Vividly

It may go without saying, but tell your own story, without borrowing from someone else’s or embellishing. Profound reflection, insight, and wisdom can be gleaned from the seemingly simplest experiences, so don’t feel the need to stray from the truth of your unique personal experiences.

Also, make sure to laser in on a highly specific event, obstacle, interest, etc. It is better to go “narrower and deeper” than to go “wider and shallower,” because the more specific you are, the more vivid and engrossing your essay will naturally be.

For example, if you were a camp counselor every summer for the last few years, avoid sharing several summers’ worth of content in your essay. Focus instead on one summer , and even better, on one incident during that summer at camp.

And on that note, remember to be vivid! Follow the cardinal rule of writing: show and don’t tell . Provide specific details, examples, and images in order to create a clear and captivating narrative for your readers.

6) Be Mindful of Voice and Tone

Unlike in most academic essays, you can sound a bit less stuffy and a bit more like yourself in the Common App essay. Your essay should be professional, but can be conversational. Try reading it aloud; does it sound like you? That’s good!

Be mindful, however, of not getting too casual or colloquial in it. This means avoiding slang, contractions, or “text speak” abbreviations (e.g. “lol”), at least without deliberate context in your story (for example, if you’re recounting dialogue).

You’re still appealing to academic institutions here, so avoid profanity at all costs, and make sure you’re still upholding all the rules for proper style, grammar, and punctuation.

7) Revise and Proofread

This one is a biggie. Give yourself time during your application process to revise, rework, and even rewrite your essay several times. Let it grow and change and become the best version it can be. After you write your first draft, walk away from it for a couple days, and return to it with fresh eyes. You may be surprised by what you feel like adding, removing, or changing.

And of course, make sure your essay is pristine before you submit it. Triple and quadruple check for spelling and usage errors, typos, etc. Since this isn’t a timed essay you have to sit for (like the ACT essay test, for example), the college admissions readers will expect your essay to be polished and sparkling.

A tried and true method for both ensuring flow and catching errors is reading your essay aloud. You may sound a little silly, but it really works!

What Should I Avoid in My Common App Essay?


Resume Material

Your Common App essay is your chance to provide a deeper insight into you as a person, so avoid just repeating what you’d put on a resume. This is not to say you can’t discuss something mentioned briefly on your resume in greater depth, but the best essays offer something new that helps round out the whole college application.

Controversy

Okay, now this one is a bit tricky. On the one hand, you should write boldly and honestly, and some of the prompts (the one about challenging a particular belief, for example) are appropriate for addressing potentially contentious topics.

But that said, avoid being controversial or edgy for the sake of being controversial or edgy. Be steadfast in your beliefs for the greater sake of the narrative and your essay will be naturally compelling without being alienating to your readers.

Vague Stories

If you have a personal story that you’re not entirely comfortable sharing, avoid it, even if it would make a great essay topic in theory. This is because if you’re not comfortable writing on the subject matter, you’ll end up being too vague, which won’t do your story or overall application justice. So choose a subject matter you’re familiar with and comfortable discussing in specifics.

Platitudes and Cliches

Unless they really, truly serve your essay, avoid general platitudes and cliches in your language. It is definitely encouraged to have an essay with a moral, lesson, or greater takeaway, but try to avoid summing up what you’ve learned with reductive phrases like “slow and steady wins the race,” “good things come in small packages,” “actions speak louder than words,” “you can’t judge a book by its cover,” and so on.

What Are Some Good Common App Essay Examples?


There are tons of Common App essays out there, including these Common App essay examples accepted at Connecticut College, which include explanations from admissions readers about why they were chosen.

But let’s take a look here at two versions of an example essay, one that is just okay and one that is great.

Both Common App essay examples are crafted in response to prompt #2, which is: “The lessons we take from obstacles we encounter can be fundamental to later success. Recount a time when you faced a challenge, setback, or failure. How did it affect you, and what did you learn from the experience?”

Essay Version #1, Satisfactory Essay:

During my sophomore year of high school, I tore my ACL, which stands for “anterior cruciate ligament,” and is the kiss of death for most athletic careers. This injury ended up being one of the greatest obstacles of my life. It was also, however, a turning point that taught me to see opportunity amidst adversity.

It was particularly awful that I was just about to score a winning goal during a championship hockey game when I was checked by a guy on the opposing team and came crashing down on my knee. It was pain unlike anything I’d ever felt before, and I knew immediately that this was going to be bad.

For the few months that followed the accident, I was lost, not really knowing what to do with myself. I didn’t know who I was anymore because hockey had been my whole world and sense of identity. Between working out, attending practice, playing home and away games, and watching games to learn more, it was my lifeblood. Losing my ability to play took a toll on me physically and emotionally and I grew lethargic and depressed.

And then one day I heard my school would be adding an advanced multimedia art class for those students who wanted to continue studying art beyond what was already offered. I had taken the handful of art classes my school offered and really enjoyed and excelled at them—though I had never considered them more than just fun electives to fill my schedule, as required.

After a couple of weeks of the class, I began feeling better. Suddenly I wanted to draw or paint everything I looked at. I wanted to share the world around me as I saw it with others, to connect with people in a way I’d never done before. I met and made friends with many new people in that art class, people I would have never known if I hadn’t taken it, which also opened me up to all kinds of new mindsets and experiences.

We’re all familiar with the common adage, “When one door closes, another opens,” and this is exactly what happened for me. I might never have pursued art more seriously if I hadn’t been taken out of hockey. This has served as a great reminder for me to stay open to new opportunities. We never know what will unexpectedly bring us joy and make us more well-rounded people.

Areas for Improvement in Version #1:

  • It lacks a compelling hook.
  • The discussion of the obstacle and reflection upon it are both a bit rushed.
  • It could use more vivid and evocative language.
  • It uses a cliche (“one door closes”).
  • It is somewhat vague at times (e.g. what kinds of “new mindsets and experiences” did the writer experience? In what ways are they now more “well-rounded”?).

Now let’s apply this feedback and revise the essay.

Essay Version #2, Excellent Essay:

My body was splayed out on the ice and I was simultaneously right there, in searing pain, and watching everything from above, outside of myself. It wasn’t actually a “near death” experience, but it was certainly disorienting, considering that just seconds before, I was flying down the ice in possession of the puck, about to score the winning goal of our championship game.

Instead, I had taken a check from an opposing team member, and had torn my ACL (or anterior cruciate ligament), which is the kiss of death for most athletic careers.

My road to recovery included two major surgeries, a couple months on crutches, a year of physical therapy, and absolutely zero athletic activity. I would heal, thankfully, and regain movement in my knee and leg, but I was told by doctors that I may never play hockey again, which was devastating to me. Hockey wasn’t just my passion—it was my life’s goal to play professionally.

For the few months that followed the accident, I was lost, feeling like a ghost haunting my own life, watching everything but unable to participate. I didn’t know who I was anymore because hockey had been my whole world and sense of identity. Between working out, attending practice, playing home and away games, and watching games to learn more, it was my lifeblood. Losing my ability to play took a toll on me physically and emotionally, and I grew lethargic and depressed.

And then one day I heard my school would be adding an advanced multimedia art class after school for those students who wanted to study art more seriously. I had already taken the handful of art classes my school offered and really enjoyed them—though I had never considered them more than just fun electives to fill my schedule, as required. And, because of hockey, I certainly had never had afternoons open.

After a couple of weeks of the class, I began to feel alive again, like “myself” but renewed, more awake and aware of everything around me. Suddenly I wanted to draw or paint everything I looked at, to bring everything I saw to life. It wasn’t just that I’d adopted a new hobby or passion, it was that I began looking more closely and critically at the world around me. I wanted to share what I saw with others, to connect with people in a way I’d never done before.

My art teacher selected a charcoal portrait of mine to be showcased in a local art show and I’ve never been more proud of myself for anything. Many of my friends, family members, and teammates came to see the show, which blew me away. But I also realized then just how much of my own self-worth had been attached to people’s perception of me as a successful athlete. I learned how much better it feels to gain self-worth from within. Unlike hockey, which I’d trained to be good at since I was a toddler, art is something that made me much more vulnerable. I didn’t do it to try to be the best, I did it because it felt good. And getting out of my comfort zone in this way gave me a sense of confidence I had never known prior, despite all my time on the ice during high-stakes games.

Today, I’m back in skates and able to play hockey, but will probably not play professionally; while I am disappointed, I’m also at peace with it. We make plans in life, and sometimes life has other plans for us that we have to adapt to and embrace, which is the more profound lesson I’ve learned in the healing process. We can crumple in the face of obstacles, or we can look for a silver lining and allow ourselves to grow into more complex, dynamic, well-rounded people. I don’t know what the rest of life holds for me, but I do know that I’m going to keep making art, and I’m going to keep opening myself up to new opportunities and experiences.

Strengths of Version #2:

  • It has a compelling hook that draws the reader in.
  • It has a clear beginning, middle, and end (expressed as an introduction, body, and conclusion).
  • It directly addresses the prompt at hand and sticks to it.
  • It focuses on one specific incident.
  • It is well balanced in its explanation of and reflection on a given experience.
  • It uses a clear, unique voice and tone as well as vivid, evocative language.
  • It has a logical and cohesive flow.
  • It is highly personal while also polished and professional.

Hopefully these examples have given you ideas of how you can take your Common App essay from good to great. If you have more questions about how to write a Common App essay, keep reading our FAQs below.


How much do I actually have to write for the Common App essay?

Last year, the Common App essay was capped at 650 words, with a minimum of 250 words required. The best essays tend to range between 500 and 650 words.

Think of it this way as you start to draft: 500 words is one single-spaced page (250 words is one double-spaced page), so you should write roughly a page to a page and a half of typed, single-spaced content.

Where can I find the official Common App essay prompts?

Here are the 2021-2022 Common App essay prompts, which are the same as last year’s, with the exception of a new prompt #4 and the addition of a Covid-19 Common App prompt.

Do I need a title for the Common App essay?

A title is not required for the Common App essay, but you are, of course, more than welcome to include one if you’d like.

Where can I go for more information about the Common App essay?

All of the necessary information for the Common App and the Common App essay can be found on the Common Application home page.

For further reading, here are some posts that tackle and dispel common myths about the Common App essay:

  • Myth: The Common App essay must sound professional.
  • Myth: Colleges can’t tell if someone helps write a Common App essay.

If you haven’t already, you can download our free Common App essay checklist.

Happy Writing!

There you have it! The Common App essay can actually be quite rewarding to write if you give yourself enough time to prepare for it thoroughly. Remember, it’s all about you, and you’re the authority on that! So hunker down and don’t forget to have fun in the writing process.

We’d also love to hear from you! What questions or concerns do you still have about the Common Application essay? What are you thinking about writing on?

Comment below, and good luck!

Nadyja Von Ebers

Nadyja von Ebers is one of Magoosh’s Content Creators. Nadyja holds an MA in English from DePaul University and has taught English at the high school and college levels for twelve years. She has a decade of experience teaching preparation for the AP exams, the SAT, and the ACT, among other tests. Additionally, Nadyja has worked as an academic advisor at the college level and considers herself an expert in all things related to college prep. She’s applied her college expertise to posts such as UCLA Admissions: The SAT Scores, ACT Scores, and GPA You Need to Get in and A Family Guide to College Admissions. Nadyja loves helping students reach their maximum potential and thrives in both literal and virtual classrooms. When she’s not teaching, she enjoys reading and writing for pleasure and loves spending time in or near the ocean. You can connect with her on LinkedIn!



2021-2022 Common Application Writing Prompts

The Common App essay prompts will remain the same for 2021-2022 with one exception: there will no longer be a prompt on “solving a problem.” It will be replaced by the following:

“Reflect on something that someone has done for you that has made you happy or thankful in a surprising way. How has this gratitude affected or motivated you?”

This is the first change to the writing prompts in several years and the new prompt seems to be a nod to one’s humanity — specifically to the themes of gratitude and selflessness.

Also of note: the optional COVID-19 question within the “Additional Information” section will be retained and should be reviewed.

Below is the full set of essay prompts for 2021-2022.

  • Some students have a background, identity, interest, or talent that is so meaningful they believe their application would be incomplete without it. If this sounds like you, then please share your story.
  • The lessons we take from obstacles we encounter can be fundamental to later success. Recount a time when you faced a challenge, setback, or failure. How did it affect you, and what did you learn from the experience?
  • Reflect on a time when you questioned or challenged a belief or idea. What prompted your thinking? What was the outcome?
  • Reflect on something that someone has done for you that has made you happy or thankful in a surprising way. How has this gratitude affected or motivated you?
  • Discuss an accomplishment, event, or realization that sparked a period of personal growth and a new understanding of yourself or others.
  • Describe a topic, idea, or concept you find so engaging that it makes you lose all track of time. Why does it captivate you? What or who do you turn to when you want to learn more?
  • Share an essay on any topic of your choice. It can be one you’ve already written, one that responds to a different prompt, or one of your own design.


New Common App essay prompt added for 2021-22 cycle | College Connection

Back in 1975, administrators from 15 colleges got together and decided to create one application that students could use to apply to any or all of their colleges. This was the birth of the “Common App,” which, as of 2021, is accepted by more than 900 colleges and universities across the United States.

More than 1 million students are expected to use this year’s Common App – which will go live with its latest updates on Aug. 1 – to file about 6 million applications. But college-bound students can get started on the most time-consuming part of the application right now.

An integral component of the Common App is an essay of 250 to 650 words that is required by most of its participating colleges. Students have a choice of seven essay topics, one of which states, “Share an essay on any topic of your choice. It can be one you’ve already written, one that responds to a different prompt, or one of your own design.” So, the topic options are truly limitless!


Most students, however, choose to respond to one of the six clearly defined essay questions. The first prompt asks students to share a story about their background, identity, interest or talent that is particularly meaningful. The second asks about a time when students faced a challenge, setback or failure and an explanation about what was learned from the experience. The third prompt has students reflecting on a time when they questioned or challenged a belief or idea.

The fourth prompt, which is a new option for the 2021-22 application cycle, asks students to reflect on something someone has done for them that has made them happy or thankful, and to describe how this gratitude has affected them. The fifth prompt asks students to discuss an accomplishment, event, or realization that sparked a period of personal growth. The sixth prompt instructs students to describe a topic, idea, or concept that totally captivates them.

In addition to the essay, the Common App includes a series of questions in several categories including parents’ educational history and current employment, and students’ SAT/ACT/AP test scores, senior year courses, high school activities, and intended college major.

On the “dashboard” of the Common App, students list all the colleges to which they want to apply. Most colleges have some additional questions, and some even have supplemental essays (although they are usually only looking for 100 to 250 words). Once all the questions are answered and essays are completed, students pay the application fee for each college online and press “submit.” Then the waiting game begins!

Susan Alaimo is the founder and director of Collegebound Review that, for the past 25 years, has offered PSAT/SAT® preparation and private college advising by Ivy League educated instructors. Visit CollegeboundReview.com or call 908-369-5362.

Admission By Design

Below is the full set of Common App essay prompts for 2022-2023. Students choose one of the seven prompts. Responses are limited to 650 words.

  • Some students have a background, identity, interest, or talent that is so meaningful they believe their application would be incomplete without it. If this sounds like you, then please share your story.
  • The lessons we take from obstacles we encounter can be fundamental to later success. Recount a time when you faced a challenge, setback, or failure. How did it affect you, and what did you learn from the experience?
  • Reflect on a time when you questioned or challenged a belief or idea. What prompted your thinking? What was the outcome?
  • Reflect on something that someone has done for you that has made you happy or thankful in a surprising way. How has this gratitude affected or motivated you?
  • Discuss an accomplishment, event, or realization that sparked a period of personal growth and a new understanding of yourself or others.
  • Describe a topic, idea, or concept you find so engaging that it makes you lose all track of time. Why does it captivate you? What or who do you turn to when you want to learn more?
  • Share an essay on any topic of your choice. It can be one you’ve already written, one that responds to a different prompt, or one of your own design.


Guide to the Common App Essays: Writing about Your Background (Prompt 1)

The 2021-22 Common Application's essay prompt 1 asks you to write about your background or identity. But what is it REALLY asking? Get the lowdown from College Essay Advisors Founder and Chief Advisor, Stacey Brook. She’ll break this prompt down into its basic building blocks and offer some insider tips and strategies for picking the perfect topic.


Guide to the Common App Essays: Writing about Gratitude (Prompt 4)

The 2021-22 Common Application's fourth essay prompt (full text below) asks you to reflect on a time in your life when you have felt thankful. But how do you flip the script and talk about what others have done for you, instead of what you've done for others, in a meaningful way? Get the lowdown from College Essay Advisors' Founder and Chief Advisor, Stacey Brook. She’ll break this prompt down into its basic building blocks and offer some insider tricks and strategies for drafting your response.

Common Application Prompts: The Ultimate Breakdown


The Common Application's personal statement is often the deciding factor between candidates with similar test scores, grades, and extracurriculars: but what makes a candidate's college essay stand out? In this video, we walk you through the seven Common App prompts, explaining what each question is really asking and providing helpful tips on what admissions is really looking for in response.


Guide to the Common App Essays: Writing about Setbacks and Failure (Prompt 2)


The second essay prompt of the 2021-22 Common Application asks you to talk about how you approach challenges, obstacles, and even (GASP!) failures. Watch now!

Guide to the Common App Essays: Questioning a Belief or Idea (Prompt 3)


The third essay prompt of the 2021-22 Common Application asks you to open up about a time when your opinion was unpopular. How can you write a powerful essay without polarizing readers who disagree with you? It's tricky -- but totally possible! Get the inside scoop from College Essay Advisors Founder and Chief Advisor, Stacey Brook.

Guide to the Common App Essays: Writing about Personal Growth (Prompt 5)


The fifth essay prompt of the 2021-22 Common Application asks you to talk about a moment of personal growth. But what does admissions really want to hear about? What counts as a period of personal growth? CEA's Founder and Chief Advisor, Stacey Brook, gives you the lowdown on the Common App's fifth prompt in this video!

Guide to the Common App Essays: Sharing Your Passions and Obsessions (Prompt 6)


The sixth essay prompt of the 2021-22 Common Application asks you to write about the driving force behind your intellectual curiosity. But how can you tap into your inner nerd without going overboard? Get the insider scoop from College Essay Advisors Founder and Chief Advisor, Stacey Brook.

Guide to the Common App Essays: Tackling the Topic of Your Choice (Prompt 7)


The seventh essay prompt of the 2021-22 Common Application is the legendary topic of your choice. If you're wondering whether you should choose to respond to this prompt or one of the other six, stay tuned. College Essay Advisors' Founder and Chief Advisor, Stacey Brook, is here to point you in the right direction and give you some valuable advice along the way.


Ivy Insight

2021-2022 Common App Essay Prompts Uncovered and Dissected (Part 1)


In today’s episode of “College Admissions Real Talk”, Dr. Legatt discusses the new common app essay prompts in her first two-parter.

Have a question? Text 610-222-5762.

Subscribe now wherever you listen to podcasts: iTunes, Libsyn, YouTube, Spotify, iHeartRadio

VO: Welcome to College Admissions Real Talk with Dr. Aviva Legatt, a podcast for students seeking to get admitted to top-tier colleges. Each episode will feature an important tip for your college admission success, delivered with candor and love. If you’ve ever wanted to take a peek inside the mind of a college admissions officer, this is your chance. Have a question? Text Dr. Legatt at 610-222-5762. So, what’s your dream school? 

AL: Welcome to College Admissions Real Talk. This is Dr. Aviva Legatt, founder and Elite Admissions Expert at Ivy Insight and author of “Get Real and Get In”. Today, we’re going to be discussing the 2021-2022 Common Application essay prompts. These essay prompts were just released on February 16th, and you may be surprised to know that some students do get started on their college essays this early, even if they’re just juniors. One favor that’s done for you in this college application process is getting these prompts early. The truth is, your college process is going to sneak up on you, and starting in February or March will be really, really helpful, because then you’ll be able to get yourself organized before things get really busy in the summer and the fall. For the Common Application essay this year, the prompts are remaining mostly the same, with one exception: the Common Application is planning on retiring what they say is their seldom-used option about solving a problem and replacing it with the following question: “Reflect on something that someone has done for you that has made you happy or thankful in a surprising way. How has this gratitude affected or motivated you?” The Common Application is also planning on keeping their optional COVID-19 question within the additional information section. As you may know, there’s also another space in the Common App where you can write additional information that’s not COVID-19 related, and that section is generally dedicated to your activities and expanding upon any information in the Common App that you were unable to explain previously. Regarding the new prompt, the Common App says that it’s grounded in scientific research on gratitude and kindness and the benefit of writing about the good things happening in our lives as opposed to focusing on the bad things.
As I’ve shared with you in our previous episode, college admissions offices always want to see some kind of happy ending, or lemonade that you’ve been able to make out of a difficult situation. This prompt is really giving voice to that positivity that you’ve been able to experience or generate even in these challenging times. So, I’m going to go over all the prompts for this year so you have a sense of how to approach them, but I definitely want to encourage you to give this one a thought, and I’ll go into a little bit more depth about what exactly I think they’re asking in this prompt. I have to say, I don’t love the wording of this prompt. I don’t think it’s very clear, but I’m going to try to explain it for you as best as I can. The full set of essay prompts for 2021-2022 is as follows. The first one is: “Some students have a background, identity, interest, or talent that is so meaningful they believe their application would be incomplete without it. If this sounds like you, then please share your story.” This prompt is a great option for students who can really speak to a personal, in-depth experience, especially something outside of the school environment. For example, if they’ve experienced a move, a loss, or a divorce, something like that, this would be a great prompt to pick if you have a personal story like that. If you’re going to just describe something that you’ve been working on for a lot of years, like playing the piano, your karate, or going to your religious house of worship every week, that may or may not be the best choice here. It really depends on how you explain your learning and your growth from whatever experience you choose to talk about. I would recommend this prompt for students who can be very self-reflective about their life experiences and how they’ve shaped their goals for the future. The second prompt is: “The lessons we take from obstacles we encounter can be fundamental to later success.
Recount a time when you faced a challenge, setback, or failure. How did it affect you, and what did you learn from the experience?” This is another example of a prompt where it’s really important to focus on the learning and the growth as opposed to the obstacles that you’ve encountered. The prompt can be a little bit tricky, because the first sentence talks about obstacles, and so the trap that many people fall into is spending too much time talking about the obstacles and not enough about the lessons that they’ve taken. So again, the question is: how did it affect you? What did you learn from the experience? Here it’s all about how this experience has shaped your character and your goals for the future. Number three: “Reflect on a time when you questioned or challenged a belief or idea. What prompted your thinking? What was the outcome?” This is a great prompt for somebody who is politically inclined or religiously inclined, who has opinions and ideas about the world, and who is not so stubborn or steadfast in them that they can’t grow or learn from an alternate perspective. I would suggest this prompt for anyone who is looking at a liberal arts or humanities major, because it is a great chance to showcase your critical thinking skills as an applicant in this area. Again, I would only recommend this prompt to people who are willing and able to have their ideas and beliefs challenged, because the ultimate goal is not to talk about your beliefs but to talk about how they can be changed, reshaped, and reformed based on new information that you gain. Number four: “Reflect on something that someone has done for you that has made you happy or thankful in a surprising way. How has this gratitude affected or motivated you?”
So as promised, this is a new prompt, and I want to spend a little bit of time on it because it is a prompt that no one has ever answered before on the Common Application, so it’s going to require some extra thought, care, and attention. I also want to call out this prompt a little bit more than the others, because my read of the Common Application’s press release is that they actually want to encourage students to answer it. They spend a lot of time and money researching the effects of gratitude, and they want to use that research by finding out how you respond to this prompt. That’s my slightly cynical view, so forgive me here. But I do want to encourage you to at least give this prompt consideration, even if you ultimately decide it’s not the best fit for you. So: reflect on something that someone has done for you that has made you happy or thankful in a surprising way. Happiness is subjective. If I were going to edit this prompt, I would take out the word “happy” and just use the word “thankful”. If you look at the research that the Common App cites, they talk all about gratitude. “Gratitude” and “thankfulness” are synonyms of each other. When you are thankful and grateful, you can be happy, but that’s not always the outcome. So, from me to you, I would say edit out the word “happy”, because it is an extra word that makes the prompt more confusing. Let’s reread it: “Reflect on something that someone has done for you that has made you thankful in a surprising way.” I think that in this pandemic, of course, we’ve all gone through our own challenges, and a lot of us have experienced gratefulness in places that we didn’t expect to. For example, in my own life, I am thankful that my son was able to be home with us for the first few months of the pandemic. My son is three.
He normally would have gone to daycare every single day, and over the last year we’ve had a ton more time with him, because we have some extra help that we use around the house instead of sending him out. So as a family, we’ve been grateful to spend more time together, even though it’s been under difficult circumstances. Now, you may have a completely different story. Maybe your story is something like one of my students’, who was able to start his own publication online and felt motivated to connect with students from all over the world, not just from his school, to found and propel this publication into some degree of notoriety. The publication was eventually acquired by a larger publication, so it was an incredibly productive endeavor and a really enriching one as well, and certainly not one that he set out to do or expected to do; the opportunity was made available because of this challenging circumstance that we’ve been in. So, in my mind, that’s the surprise: the rug has been pulled out from under you in some way, but there’s been something special that you’ve been able to make out of the situation or to gain from it.

VO: College Admissions Real Talk is hosted by Aviva Legatt, edited by Stephanie Carlin, and produced by Incontrera Consulting. I’m Caroline Stokes and this has been your daily boost of college admissions insight. Have a question? Text Dr. Legatt at 610-222-5762. For more information on Dr. Legatt and Ivy Insight visit www.ivyinsight.com. And you can pick up Dr. Legatt’s book, “Get Real and Get In”, at major retail outlets across the world. Insight out.

How to Write an Outstanding Common App Essay

Our best advice for impressing admissions officers with your Common App essay.

You probably know that the Common App asks for a personal essay. It’s an essay that matters — almost 1000 schools accept it. (Most make it mandatory; for a few it’s optional, but we still recommend submitting it.)

How do I do well? You’ve got to eloquently cover the one or two experiences that show you’ll succeed in college and beyond. Sadly, that’s a tough trick to pull off. But we’ve got a few tips that can make a big difference.

Check out our free resources, including:

  • How to choose the best prompt, and avoid common pitfalls.
  • How to structure your essay, which lays out two strong formats that will help organize your thinking.

For one-on-one help with your essay, try us at Prompt. We can help you get started, give you feedback before you submit your final draft, and everything in between.

Common App Prompts 2020-2021

  • Some students have a background, identity, interest, or talent so meaningful they believe their application would be incomplete without it. If this sounds like you, please share your story.
  • The lessons we take from obstacles we encounter can be fundamental to later success. Recount a time when you faced a challenge, setback, or failure. How did it affect you, and what did you learn from the experience?
  • Reflect on a time when you questioned or challenged a belief or idea. What prompted your thinking? What was the outcome?
  • Describe a problem you’ve solved or a problem you’d like to solve. It can be an intellectual challenge, a research query, an ethical dilemma — anything of personal importance, no matter the scale. Explain its significance to you and what steps you took or could be taken to identify a solution.
  • Discuss an accomplishment, event, or realization that sparked a period of personal growth and a new understanding of yourself or others.
  • Describe a topic, idea, or concept you find so engaging it makes you lose all track of time. Why does it captivate you? What or who do you turn to when you want to learn more?
  • Share an essay on any topic of your choice. It can be one you’ve already written, one that responds to a different prompt, or one of your own design.

You choose one of the seven prompts.

Word limit: 650 words.

Which Common App prompts are the most popular?

This graph illustrates the choices made by over 1,500 Prompt clients. According to our data, two essays are by far the most popular — the “Background” essay (#1) and the “Accomplishment” essay (#5), with roughly 30% of our clients choosing each.

Interestingly, the most popular choice according to the Common App's own data is different: it's the “choose your own topic” prompt (#7), which is only the fourth-most popular among our students. But, except for that, the Common App reports the same sequence as we do, with the “Accomplishment” essay (#5) also in second position and the “Obstacle” essay (#2) also in third.


How to Answer the Common App Essay Prompts (2020-2021) – A Complete Breakdown of All 7 Prompts

#1 Some students have a background, identity, interest, or talent that is so meaningful they believe their application would be incomplete without it. If this sounds like you, then please share your story.

#2 The lessons we take from obstacles we encounter can be fundamental to later success. Recount a time when you faced a challenge, setback, or failure. How did it affect you, and what did you learn from the experience?

#3 Reflect on a time when you questioned or challenged a belief or idea. What prompted your thinking? What was the outcome?

#4 Describe a problem you’ve solved or a problem you’d like to solve. It can be an intellectual challenge, a research query, an ethical dilemma — anything that is of personal importance, no matter the scale. Explain its significance to you and what steps you took or could be taken to identify a solution.

#5 Discuss an accomplishment, event, or realization that sparked a period of personal growth and a new understanding of yourself or others.

#6 Describe a topic, idea, or concept you find so engaging that it makes you lose all track of time. Why does it captivate you? What or who do you turn to when you want to learn more?

#7 Share an essay on any topic of your choice. It can be one you’ve already written, one that responds to a different prompt, or one of your own design.



The 2020-2021 Common Application Essay Prompts Are Here


The Common Application has just announced that the essay prompts will be the same as those used in 2019-2020. Every cycle, the Common App offers six prompts that students can use to brainstorm great essay topics. There is also a seventh prompt to write on any topic of your choosing.

New to college applications? Keep reading this article to learn why these prompts matter, when to start your essay, and how you can be preparing for college applications now.

2020-2021 Common Application Essay Prompts

Here are the essay prompts from last year, which will be used again in this upcoming application cycle. Since we have worked with these prompts extensively in the past, we can confirm that these can inspire some pretty great essays.

Prompt #1: Some students have a background, identity, interest, or talent that is so meaningful they believe their application would be incomplete without it. If this sounds like you, then please share your story.

Prompt #2: The lessons we take from obstacles we encounter can be fundamental to later success. Recount a time when you faced a challenge, setback, or failure. How did it affect you, and what did you learn from the experience?

Prompt #3: Reflect on a time when you questioned or challenged a belief or idea. What prompted your thinking? What was the outcome?

Prompt #4: Describe a problem you’ve solved or a problem you’d like to solve. It can be an intellectual challenge, a research query, an ethical dilemma – anything that is of personal importance, no matter the scale. Explain its significance to you and what steps you took or could be taken to identify a solution.

Prompt #5: Discuss an accomplishment, event, or realization that sparked a period of personal growth and a new understanding of yourself or others.

Prompt #6: Describe a topic, idea, or concept you find so engaging that it makes you lose all track of time. Why does it captivate you? What or who do you turn to when you want to learn more?

Prompt #7: Share an essay on any topic of your choice. It can be one you’ve already written, one that responds to a different prompt, or one of your own design.

What is the Purpose of the Common App Essay?

By the time you apply to college, you have gathered a long list of grades, test scores, and extracurricular accomplishments. But, while admissions officers are interested in seeing what you have done with your high school years, what they really want to know is who you are. When people read your college application, they want to know, “Is this someone who will succeed at our school?”

In your essay, you get to tell a story or two that introduces admissions officers to you as a candidate for their school. In your introduction, you want to come across as smart, thoughtful, and mature. Your essay should be deeply personal, error-free, and written in language that demonstrates you are prepared for the academic challenge of college.

What you choose to write about does not matter nearly as much as how you address the topic. We have seen winning essays on the alarm clock, Robotics Club, death, and home cooking. The commonality that all these essays shared was that they portrayed the author as a thoughtful person of good character. Just about any strong college essay will answer a few key questions:

  • Why Am I Here?
  • What is Unique About Me?
  • What Matters to Me?

While the essay can be about any topic, the Common App provides a few suggestions to help students start out on the right foot. Whether you write to a prompt or brainstorm a fresh idea, make sure your essay addresses these key questions. Before you begin writing essays, we recommend checking out our post How to Write the Common Application Essays 2019-2020.

Ways to Prepare for College Applications Now

We recommend waiting until late summer or early fall of your Senior year before you begin writing personal essays. Those few months actually make a big difference in how students reflect on their lives and what anecdotes they choose to highlight. 

If you are eager to get a head start on the college application process, here are some goals you can shoot for now as a Junior:

1. Build an epic extracurricular profile.

If your goal is to ace your college applications, the single most important thing you can be doing (besides keeping your grades up) is to cultivate a crowning achievement of your extracurricular profile. Your Junior summer is your chance to demonstrate that you care deeply about these out-of-school interests. It’s your opportunity to show that you know how to maximize available resources to create something meaningful.

Take time as a Junior to think about what impact you want to have outside of the classroom. Think of positive experiences you have had leading up to this point when it comes to ECs. The more substantial an impact your extracurricular endeavors make, the more competitive your application as a whole will be.

Impact will look different for everybody. Some students have breadth of impact by planning a large event. Others accomplish depth of impact through a service project that supports a few people in a big way. Still others trailblaze, taking the first steps in the uncharted territories of an extracurricular activity that few students in their community pursue.

If you’re concerned about your extracurricular profile because you haven’t developed it much up until this point, there are still steps you can take to improve your ECs. See our post How to Improve Your Extracurriculars Junior and Senior Year for tips on how to make the most of the time you have left.

For more advice on how to craft a successful extracurricular profile, check out these CollegeVine posts:

Breaking Down the 4 Tiers of Extracurricular Activities

Your Complete List of Extracurricular Activities

Your Ultimate Guide to Summer Programs for High Schoolers

A Guide to Extracurricular Activities: Grade 11

2. Ask 2-3 people to write your letters of recommendation.

Think carefully about who should write these letters for you, and give them plenty of advance warning before your earliest deadline (at least one month). You should ideally ask teachers who know you both as a student and in an extracurricular context; for example, a math teacher who also advises your debate team could be a good pick. This isn’t always possible, of course, so aim for teachers who know you well and can speak very positively of you. You should also try to ask teachers you’ve had recently.

You can learn more about connecting with recommenders by checking out these related articles:

How to Pick Which Teachers to Ask for Letters of Recommendation

9 Rules for Requesting Letters of Recommendation from Teachers

What Makes a Good Recommendation Letter?

Should You Submit an Additional Letter of Recommendation?

A Step-by-Step Guide to Your Recommendation Letters

3. Complete all standardized testing.

While you will have opportunities to take your SAT, ACT, and SAT Subject Tests in your Senior fall, it pays to wrap up this process as a Junior. You will want the time your Senior year to focus on essays and extracurricular activities.

Here are a few additional resources for those looking to wrap up their standardized testing by the end of their Junior year:

When Is the Best Time to Take the SAT?

SAT vs. ACT: Everything You Need to Know

ACT Score Range: What Is a Good ACT Score? A Bad ACT Score?

Why Should You Take SAT Subject Tests?

Complete List of SAT Subject Tests

4. Familiarize yourself with the Common App and begin brainstorming essay ideas.

The Common App allows you to build one application and send it to hundreds of schools. Filling out the form is fairly straightforward, and most sections take less than a half-hour to complete. You can create an account today, and the Common App will let you roll over any information you have submitted when the 2020-2021 application cycle opens in August.

Beginning this process early ensures that little details will not slip through the cracks. For example, one student of ours practiced piano for ten years but almost forgot to note that extracurricular on her application. Luckily, she had been updating the form for months, so when she remembered this important extracurricular activity, it was easy for her to log on and update her file.

Additionally, starting early gives you time to brainstorm essay ideas. We recommend keeping a journal or a running Google Doc of ideas so that you have a plethora of good ones to choose from once it is time to start writing your essays. For more on brainstorming essay topics and the Common App in general, check out these links:

A User’s Guide to the Common App

What Is a Personal College Essay?

How Important Is the College Essay?

Where to Begin? 3 Personal Essay Brainstorming Exercises

Why This Common App Essay Worked: Prompt 2: “The Lessons We Take…”



Trying to figure out if the Common App essay prompts are any different this year? We're here to answer all of the questions you may have and to give you, a college student to be, as much guidance as possible on how to approach the Common App essay. Learn what the prompts are, which one is the most popular, and how to use them effectively.

Are the essay prompts 2022-23 different from the Common App essay prompts 2021-22?

The Common App essay prompts are here, and there are no surprises this year: all of the prompts are exactly the same as last year's. There are seven essay topics to choose from, covering a range of topics, tones, styles, and subjects to help you shape your 650-word essay.

What’s new in Common App essay prompts for 2022-23?

The Common App essay plays a vital role in your application to selective and Ivy League colleges. As stated before, there are no new prompts this year. The Common App essay prompts are as follows:

  • Some students have a background, identity, interest, or talent that is so meaningful they believe their application would be incomplete without it. If this sounds like you, then please share your story.
  • The lessons we take from obstacles we encounter can be fundamental to later success. Recount a time when you faced a challenge, setback, or failure. How did it affect you, and what did you learn from the experience?
  • Reflect on a time when you questioned or challenged a belief or idea. What prompted your thinking? What was the outcome?
  • Reflect on something that someone has done for you that has made you happy or thankful in a surprising way. How has this gratitude affected or motivated you?
  • Discuss an accomplishment, event, or realization that sparked a period of personal growth and a new understanding of yourself or others.
  • Describe a topic, idea, or concept you find so engaging that it makes you lose all track of time. Why does it captivate you? What or who do you turn to when you want to learn more?
  • Share an essay on any topic of your choice. It can be one you’ve already written, one that responds to a different prompt, or one of your own design.


Which Common App prompts are the most popular?

According to Common App analytics, the most popular essay prompts are prompt 7 (choose your own topic), prompt 5 (discuss an accomplishment), and, in third place, prompt 2 (a setback or failure).

Admissions officers find that these prompts are the most common because they cover very relatable topics. This is still your chance to stand out among the other applicants, though, so be original with your personal statement.

Should you choose the most popular prompts?

When you are choosing the prompt for your personal essay, consider what will make you stand out. Picking the most popular prompt makes it harder to impress the admissions committee. Go through the Common App prompts and choose the one that best fits your life experiences.

All 2022-23 college essay prompts in the Common App

Now we’re going to go through the college application essay prompts and answer common app essay questions you may have. Our goal is to give you the additional information that you are looking for.

Prompt 1: Some students have a background, identity, interest, or talent that is so meaningful they believe their application would be incomplete without it. If this sounds like you, then please share your story.

This prompt offers an opportunity to engage with your favorite extracurricular or academic subject, and it allows you to weave a narrative that displays personal growth in that area. An essay that displays your personality and a unique interest can be attention-grabbing.

Prompt 2: The lessons we take from obstacles we encounter can be fundamental to later success. Recount a time when you faced a challenge, setback, or failure. How did it affect you, and what did you learn from the experience?

This prompt lends itself to consideration of what facets of your personality allow you to overcome adversity. While it’s okay to choose a relatively mundane “failure” such as not winning an award, another (perhaps more powerful) tactic is to write about a foundational failure and assess its impact on your development thereafter.

Prompt 3: Reflect on a time when you questioned or challenged a belief or idea. What prompted your thinking? What was the outcome?

This prompt can be the hardest one to answer, because most high schoolers haven't participated in the kinds of iconoclastic protests against societal ills that lend themselves to an awe-inspiring response.

An alternative here could be to discuss a time when you went against social norms, whether by becoming friends with someone who seemed like an outcast or by proudly showing off a geeky passion.

Prompt 4: Reflect on something that someone has done for you that has made you happy or thankful in a surprising way. How has this gratitude affected or motivated you?

While this prompt may seem to ask a simple question, your answer has the potential to give the admissions committee deep insight into who you are. Explaining what you are grateful for can show them your culture, your community, your philosophical outlook on the world, and what drives you.

Prompt 5: Discuss an accomplishment, event, or realization that sparked a period of personal growth and a new understanding of yourself or others.

This prompt is expansive in that you can choose any accomplishment, event, or realization that sparked personal growth or new understanding. It's a fairly simple prompt that you have the chance to make your own and use to impress the college admissions officers.

Prompt 6: Describe a topic, idea, or concept you find so engaging that it makes you lose all track of time. Why does it captivate you? What or who do you turn to when you want to learn more?

This prompt is great if you want to expand and deepen a seemingly small or simple idea, topic, or concept. For example, you could talk about trees. Maybe you grew up in the country or always went to the park. That simple topic can carry a deeper meaning: your love for nature grows, and you end up wanting to be an environmental biologist.

Prompt 7: Share an essay on any topic of your choice. It can be one you’ve already written, one that responds to a different prompt, or one of your own design.

This prompt allows you to express what you want to express if it doesn’t align directly with the other prompts. While this prompt is very open-ended, it doesn’t mean you can adapt any essay you’ve written and think it will suffice. Make sure to do some brainstorming and incorporate an out-of-the-box essay that will help you stand out.

How many Common App essays are required?

When you use the Common App, you only have to write one essay based on the prompts above, and it will go to all of the colleges you apply to through the app. That covers most of the colleges on your list, but double-check each school's requirements before applying!

What makes a great Common App essay?

The best way to make your essay great is to forge a deep personal connection with the reader. Think about the people who will be reading it: college admissions officers read hundreds of essays, so make sure yours is the one that stands out. If they feel connected to your essay, you are far more likely to make a memorable impression.

Key takeaways about answering the Common App questions

Now that you know the common app essay prompts are the same as last year, you can conduct proper research to make yours the best one yet. Remember to stay personal and original within your writing and follow our other essay tips to help you out.

Need professional college application essay help? Contact Prepory

Our college admissions experts are here to guide you from where you are to where you should be. Through our comprehensive curriculum, individualized coaching, and online workshops, you are set for success as soon as you connect with us.

Are Common App essay prompts the same every year?

For the most part, the common app essay prompts stay the same. However, every few years they change out a couple of the prompts.

What happens if you go over the Common App essay prompts word limit?

The Common App's stated limit is 650 words (with a 250-word minimum), so plan to stay within it. If your essay runs long, admissions officers may continue reading, but they may not finish it.

What should you avoid in your Common App essay?

The main thing to avoid when writing your personal statement is rehashing your academic and extracurricular accomplishments. Also avoid starting with a preamble and ending with a "happily ever after" conclusion.

Can you lie in your college essay?

While writing your essay, there's no need to stretch the truth. The essay is your chance to let your voice come through your application: don't waste it on lies. Your first thought when brainstorming ideas should not be about how legendary or heartbreaking your essay can be.

Are college essays kept private? Who can see my Common App college essay?

Yes, they are kept private. The only people who can see your essay are those reviewing your application, and they are legally bound to keep your information private.

  • April 5, 2021
  • College Admissions , Common App

How To Answer Common App Essay Prompts: 2022-23


How to Nail the Common App Essay (and choose the best prompt for you)

We walk you through how to write a brilliant Common App essay for 2020-2021.

The Common Application allows high school seniors to apply to any of the almost 900 colleges and universities that use it. The prompts serve to guide applicants in reflecting on their own experiences and identifying the most appropriate ones to include in their personal statement.

Want help with your Common App essay? Whether you haven’t even started or whether you’re looking for feedback on a draft you’ve already written, Prompt is here to help.

Common App Prompts 2020-2021

  • Some students have a background, identity, interest, or talent so meaningful they believe their application would be incomplete without it. If this sounds like you, please share your story.
  • The lessons we take from obstacles we encounter can be fundamental to later success. Recount a time when you faced a challenge, setback, or failure. How did it affect you, and what did you learn from the experience?
  • Reflect on a time when you questioned or challenged a belief or idea. What prompted your thinking? What was the outcome?
  • Describe a problem you’ve solved or a problem you’d like to solve. It can be an intellectual challenge, a research query, an ethical dilemma — anything of personal importance, no matter the scale. Explain its significance to you and what steps you took or could be taken to identify a solution.
  • Discuss an accomplishment, event, or realization that sparked a period of personal growth and a new understanding of yourself or others.
  • Describe a topic, idea, or concept you find so engaging it makes you lose all track of time. Why does it captivate you? What or who do you turn to when you want to learn more?
  • Share an essay on any topic of your choice. It can be one you’ve already written, one that responds to a different prompt, or one of your own design.

Students choose to respond to one of the seven prompts. The essay has a word limit of 650 words.

How should you choose and write your Common App essays?

Here are some resources:

  • This article helps you select a Common App essay topic that is both compelling and unique
  • This article teaches you how to structure your essay so that any topic works within the Common App essay length

Which Common App prompts are the most popular?

Data from over 1,500 Prompt clients illustrates which prompts students choose. According to our data, two essays are by far the most popular: the "Background" essay (#1) and the "Accomplishment" essay (#5), with roughly 30% of our clients choosing each.

Interestingly, the most popular choice according to the Common App ’s own data is different. It’s the “choose your own topic” prompt (#7), which is only the fourth-most popular among our students. But, except for that, the Common App reports the same sequence as we do, with the “Accomplishment” essay (#5) also in second position, and the “Obstacle” essay (#2) also in third.


The 2021-22 Common Application Essay Prompts

Tips and Guidance for the 7 Essay Options on the New Common Application

  • Ph.D., English, University of Pennsylvania
  • M.A., English, University of Pennsylvania
  • B.S., Materials Science & Engineering and Literature, MIT

For the 2021-22 application cycle, the Common Application essay prompts remain unchanged from the 2020-21 cycle with the exception of an all-new option #4. As in the past, with the inclusion of the popular "Topic of Your Choice" option, you have the opportunity to write about anything you want to share with the folks in the admissions office.

The current prompts are the result of much discussion and debate from the member institutions who use the Common Application. The essay length limit stands at 650 words (the minimum is 250 words), and students will need to choose from the seven options below. The essay prompts are designed to encourage reflection and introspection. The best essays focus on self-analysis, rather than spending a disproportionate amount of time merely describing a place or event. Analysis, not description, will reveal the critical thinking skills that are the hallmark of a promising college student. If your essay doesn't include some self-analysis, you haven't fully succeeded in responding to the prompt.

According to the folks at the Common Application , in the 2018-19 admissions cycle, Option #7 (topic of your choice) was the most popular and was used by 24.1% of applicants. The second most popular was Option #5 (discuss an accomplishment) with 23.7% of applicants. In third place was Option #2 on a setback or failure. 21.1% of applicants chose that option.

From the Admissions Desk

"While the transcript and grades will always be the most important piece in the review of an application, essays can help a student stand out. The stories and information shared in an essay are what the Admissions Officer will use to advocate for the student in the admissions committee."

–Valerie Marchand Welsh Director of College Counseling, The Baldwin School Former Associate Dean of Admissions, University of Pennsylvania

Always keep in mind why colleges are asking for an essay: they want to get to know you better. Nearly all selective colleges and universities (as well as many that aren't overly selective) have holistic admissions, and they consider many factors in addition to numerical measures such as grades and standardized test scores. Your essay is an important tool for presenting something you find important that may not come across elsewhere in your application. Make sure your essay presents you as the type of person a college will want to invite to join their community.

Below are the seven options with some general tips for each:

Option #1  

Some students have a background, identity, interest, or talent that is so meaningful they believe their application would be incomplete without it. If this sounds like you, then please share your story.

"Identity" is at the heart of this prompt. What is it that makes you you? The prompt gives you a lot of latitude for answering the question since you can write a story about your "background, identity, interest, or talent." Your "background" can be a broad environmental factor that contributed to your development such as growing up in a military family, living in an interesting place, or dealing with an unusual family situation. You could write about an event or series of events that had a profound impact on your identity. Your "interest" or "talent" could be a passion that has driven you to become the person you are today. However you approach the prompt, make sure you are inward looking and explain how and why  the story you tell is so meaningful. 

  • See more Tips and Strategies for Essay Option #1
  • Sample essay for option #1: "Handiwork" by Vanessa
  • Sample essay for option #1: "My Dads" by Charlie
  • Sample essay for option #1: "Give Goth a Chance"
  • Sample essay for option #1: "Wallflower"

Option #2  

The lessons we take from obstacles we encounter can be fundamental to later success. Recount a time when you faced a challenge, setback, or failure. How did it affect you, and what did you learn from the experience?

This prompt may seem to go against everything that you've learned on your path to college. It's far more comfortable in an application to celebrate successes and accomplishments than it is to discuss setbacks and failure. At the same time, you'll impress the college admissions folks greatly if you can show your ability to learn from your failures and mistakes. Be sure to devote significant space to the second half of the question—how did you learn and grow from the experience? Introspection and honesty are key with this prompt.

  • See more Tips and Strategies for Essay Option #2
  • Sample essay for option #2: "Striking Out" by Richard
  • Sample essay for option #2: "Student Teacher" by Max

Option #3

Reflect on a time when you questioned or challenged a belief or idea. What prompted your thinking? What was the outcome?

Keep in mind how open-ended this prompt truly is. The "belief or idea" you explore could be your own, someone else's, or that of a group. The best essays will be honest as they explore the difficulty of working against the status quo or a firmly held belief. The answer to the final question about the "outcome" of your challenge need not be a success story. Sometimes in retrospection, we discover that the cost of an action was perhaps too great. However you approach this prompt, your essay needs to reveal one of your core personal values. If the belief you challenged doesn't give the admissions folks a window into your personality, then you haven't succeeded with this prompt.

  • See more Tips and Strategies for Essay Option #3
  • Sample essay for option #3: "Gym Class Hero" by Jennifer

Option #4

Reflect on something that someone has done for you that has made you happy or thankful in a surprising way. How has this gratitude affected or motivated you?

Here, again, the Common Application gives you a lot of options for approaching the question since it is entirely up to you to decide what the "something" and "someone" will be. This prompt was added to the Common Application in the 2021-22 admissions cycle in part because it gives students the opportunity to write something heartfelt and uplifting after all the challenges of the previous year. The best essays for this prompt show that you are a generous person who recognizes the contributions others have made to your personal journey. Unlike many essays that are all about "me, me, me," this essay shows your ability to appreciate others. This type of generosity is an important character trait that schools look for when inviting people to join their campus communities.

  • See more Tips and Strategies for Essay Option #4

Option #5

Discuss an accomplishment, event, or realization that sparked a period of personal growth and a new understanding of yourself or others.

This question was reworded in the 2017-18 admissions cycle, and the current language is a huge improvement. The prompt used to focus on transitioning from childhood to adulthood, but the new language about a "period of personal growth" is a much better articulation of how we actually learn and mature (no single event makes us adults). Maturity comes as the result of a long train of events and accomplishments (and failures). This prompt is an excellent choice if you want to explore a single event or achievement that marked a clear milestone in your personal development. Be careful to avoid the "hero" essay; admissions offices are often overrun with essays about the season-winning touchdown or brilliant performance in the school play (see the list of bad essay topics for more about this issue). These can certainly be fine topics for an essay, but make sure your essay is analyzing your personal growth process, not bragging about an accomplishment.

  • See more Tips and Strategies for Essay Option #5
  • Sample essay for option #5: "Buck Up" by Jill

Option #6

Describe a topic, idea, or concept you find so engaging that it makes you lose all track of time. Why does it captivate you? What or who do you turn to when you want to learn more?

This option was entirely new in 2017, and it's a wonderfully broad prompt. In essence, it's asking you to identify and discuss something that enthralls you. The question gives you an opportunity to identify something that kicks your brain into high gear, reflect on why it is so stimulating, and reveal your process for digging deeper into something that you are passionate about. Note that the central words here—"topic, idea, or concept"—all have rather academic connotations. While you may lose track of time when running or playing football, sports are probably not the best choice for this particular question.

  • See more Tips and Strategies for Essay Option #6

Option #7

Share an essay on any topic of your choice. It can be one you've already written, one that responds to a different prompt, or one of your own design.

The popular "topic of your choice" option had been removed from the Common Application between 2013 and 2016, but it returned again with the 2017-18 admissions cycle. Use this option if you have a story to share that doesn't quite fit into any of the options above. However, the first six topics are extremely broad with a lot of flexibility, so make sure your topic really can't be identified with one of them. Also, don't equate "topic of your choice" with a license to write a comedy routine or poem (you can submit such things via the "Additional Info" option). Essays written for this prompt still need to have substance and tell your reader something about you. Cleverness is fine, but don't be clever at the expense of meaningful content.

  • See more Tips and Strategies for Essay Option #7
  • Sample essay for option #7: "My Hero Harpo" by Alexis
  • Sample essay for option #7: "Grandpa's Rubik's Cube"

Final Thoughts

Whichever prompt you chose, make sure you are looking inward. What do you value? What has made you grow as a person? What makes you the unique individual the admissions folks will want to invite to join their campus community? The best essays spend significant time with self-analysis rather than merely describing a place or event.

The folks at The Common Application have cast a wide net with these questions, and nearly anything you want to write about could fit under at least one of the options. If your essay could fit under more than one option, it really doesn't matter which one you choose. Many admissions officers, in fact, don't even look at which prompt you chose—they just want to see that you have written a good essay.


ChatGPT: the latest news, controversies, and tips you need to know

ChatGPT has continued to dazzle the internet with AI-generated content, morphing from a novel chatbot into a piece of technology that is driving the next era of innovation. No tech product in recent memory has sparked as much interest, controversy, fear, and excitement.

What is ChatGPT?

  • How to use ChatGPT
  • How to use the ChatGPT iPhone and Android apps
  • Is ChatGPT free to use?
  • Who created ChatGPT?
  • What do the ChatGPT errors mean?
  • Latest ChatGPT controversies
  • Can ChatGPT be detected?
  • Common uses for ChatGPT
  • What are ChatGPT plugins?
  • Is there a ChatGPT API?
  • What's the future of ChatGPT and GPT-5?
  • ChatGPT alternatives worth trying
  • Other things to know about ChatGPT

If you’re just now catching on, it’d be fair to wonder what the fuss is all about. You can try it out for yourself for free (or use the official free iOS app ), but here’s the detailed guide you’ve been looking for — whether you’re worried about an AI apocalypse or are just looking for an intro guide to the app.

ChatGPT is a natural language AI chatbot . At its most basic level, that means you can ask it any question, and it will generate an answer.

Using the ChatGPT chatbot itself is fairly simple: all you have to do is type in your text and receive information. The key is to be creative and see how ChatGPT responds to different prompts. If you don't get the intended result, try tweaking your prompt or giving ChatGPT further instructions. ChatGPT knows the context of previous questions you ask, so you can refine from there rather than starting over fresh every time.

For example, starting with “Explain how the solar system was made” will give a more detailed result with more paragraphs than “How was the solar system made,” even though both inquiries will give fairly detailed results. Take it a step further by giving ChatGPT more guidance about style or tone, saying “Explain how the solar system was made as a middle school teacher.”
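The refinement workflow described above can be sketched in code. This is a minimal, hypothetical sketch using the role/content message format that chat-style APIs such as ChatGPT's use; the `build_followup` helper is an illustration of accumulating context, not a real API call, and no network request is made.

```python
# Sketch of the chat-message format used by ChatGPT-style interfaces.
# `conversation` accumulates context, mirroring how ChatGPT remembers
# earlier questions so you can refine a prompt rather than start over.

def build_followup(conversation, new_request):
    """Append a refinement request to an ongoing conversation."""
    conversation.append({"role": "user", "content": new_request})
    return conversation

conversation = [
    {"role": "user", "content": "Explain how the solar system was made."},
]

# Refine with style guidance instead of opening a fresh thread:
conversation = build_followup(
    conversation,
    "Rewrite that explanation as a middle school teacher would give it.",
)

print(len(conversation))  # 2 messages of accumulated context
```

In a real session, the whole `conversation` list would be sent with each request, which is why follow-ups like "add more detail" or "make it shorter" work without restating the original question.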


You also have the option of making more specific requests, such as an essay with a set number of paragraphs or a Wikipedia-style page. We got an extremely detailed result with the request "write a four-paragraph essay explaining Mary Shelley's Frankenstein." And remember, ChatGPT is great at making tweaks to previous answers, so you can always ask for more detail, ask it to rewrite something, or ask it further questions.

To see what it can do, try using ChatGPT in your daily life or work activities. Ask it to write emails, craft business proposals, suggest fun date night ideas, or even draft a best man's speech. So long as your request doesn't break the rules against explicit or illegal content, the generator will do its best to fulfill it. There is certainly potential for ChatGPT to fill in gaps with incorrect data. As OpenAI notes, these instances are rare, but AI "hallucinations" certainly do happen . The company also notes that ChatGPT, which uses the GPT-3.5 LLM (large language model), currently has "limited knowledge of world events after 2021." For more recent knowledge of the world, consider using another tool like Bing Chat .

However, OpenAI recently announced that ChatGPT Plus subscribers, who have access to the GPT-4 model, will be able to search the web for up-to-date information .

You can input queries continuously until you close your browser or reset the thread to clear your previous requests. These chats are saved as conversations in the sidebar, and each one is automatically named. From there, you can manage these chats, renaming or deleting them as needed. You can even "hide" specific chats if needed .

You also have the option to use ChatGPT in dark mode or light mode.

Unlike Bing Chat, which can now generate images with Bing Image Creator and receive images as prompts for questions, ChatGPT only provides text outputs. In September 2023, however, OpenAI added the ability to use an image or voice as an input for your prompt. It’s currently only available to ChatGPT Plus subscribers.

As opposed to a simple voice assistant like Siri or Google Assistant, ChatGPT is built on what is called an LLM (Large Language Model). These neural networks are trained on huge quantities of information from the internet for deep learning, meaning they generate altogether new responses rather than just regurgitating canned ones. They’re not built for a specific purpose like chatbots of the past, and they’re a whole lot smarter.

This is implied in the name of ChatGPT, which stands for Chat Generative Pre-trained Transformer. In the case of the current version of ChatGPT, it’s based on the GPT-3.5 LLM. The model behind ChatGPT was trained on all sorts of web content including websites, books, social media, news articles, and more — all fine-tuned in the language model by both supervised learning and RLHF (Reinforcement Learning From Human Feedback). OpenAI says this use of human AI trainers is really what makes ChatGPT stand out.
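To make next-token prediction concrete, here is a toy word-level model that simply counts which word follows which in a training text and then generates by sampling from those counts. It illustrates, at a vastly smaller scale and with none of the neural machinery, the kind of statistical generation an LLM performs:

```python
# Toy stand-in for an LLM: a word-level bigram model. Real models like
# GPT-3.5 predict tokens with neural networks over billions of parameters;
# this just tabulates which word follows which.
from collections import defaultdict
import random

def train_bigrams(text):
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)  # record every observed successor
    return table

def generate(table, start, length=5, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(length):
        choices = table.get(out[-1])
        if not choices:   # no known successor: stop generating
            break
        out.append(random.choice(choices))
    return " ".join(out)
```

Training on even one sentence shows the idea: after `train_bigrams("the cat sat on the mat")`, the model knows that “sat” is followed by “on”, so `generate(table, "sat", length=2)` yields “sat on the”.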

First, go to chat.openai.com. If it’s your first time, you’ll need to set up a free account with OpenAI before getting started. You can choose an easy login with a Google or Microsoft account, or just enter your email address. Next, you’ll be asked to enter a phone number; keep in mind that you cannot use a virtual phone number (VoIP) to register for OpenAI. You will then receive a confirmation number, which you enter on the registration page to complete the setup.

You’ll see some basic rules about ChatGPT, including potential errors in data, how OpenAI collects data, and how users can submit feedback, all of which have some wondering whether ChatGPT is safe to use. Once you’re through that, you have successfully registered. You’re in!

After many months of anticipation, OpenAI has finally launched an official iOS app that you can download today. The app quickly topped half a million downloads in less than a week and is becoming available in an increasing number of countries.

Instructions for using it aren’t dissimilar to the ChatGPT web application. You do get an extra option for signing in using your Apple ID account, but it otherwise functions nearly identically to the web app — just type in your question and let the conversation begin.

The clean interface shows your conversation with GPT in a straightforward manner, hiding the chat history and settings behind the menu in the top right.

For those who are paying for ChatGPT Plus, the app lets you toggle between GPT-3.5 and GPT-4 too. You can even use the microphone to chat with ChatGPT over voice.

As our mobile editor noted in his experience with the app, it still doesn’t provide a connection to the internet the way chatbots like Bing Chat and Perplexity do.

Users have been asking for Android support for months, and now a ChatGPT Android app is finally available. You can find it in the Google Play Store, but it’s limited to certain countries at the moment, including the U.S., India, and Brazil.

Some devices go beyond just the app, too. For instance, the Infinix Folax is an Android phone that integrates ChatGPT throughout the device. Instead of just an app, the phone replaces the typical smart assistant (Google Assistant) with ChatGPT.

Yes, the basic version of ChatGPT is completely free to use. There’s no limit to how much you can use ChatGPT in a day, though there is a word and character limit for responses.

It’s not free for OpenAI to continue running it, of course. Initial estimates were that OpenAI spends around $3 million per month, or roughly $100,000 per day, to keep ChatGPT running, though a report from April indicated that the price of operation is closer to $700,000 per day.

Beyond the cost of the servers themselves, troubling information has recently come out about what else has been done to train the language model against producing offensive content.

OpenAI also has a premium version of its chatbot, called ChatGPT Plus. It costs $20 a month but provides access even during peak times, faster responses, and first access to new features like GPT-4.

ChatGPT was created by an organization called OpenAI, a San Francisco-based AI research lab. The organization started as a non-profit meant for collaboration with other institutions and researchers, funded by high-profile figures like Peter Thiel and Elon Musk.

OpenAI later became a for-profit company in 2019 and is now led by CEO Sam Altman. It runs on Microsoft’s Azure infrastructure and is powered by Nvidia’s GPUs, including the new supercomputers announced just this year. Microsoft has invested heavily in OpenAI too, starting in 2019.

Many people attempting to use ChatGPT have been getting an “at capacity” notice when trying to access the site. This is likely behind the rise of unofficial paid apps, which have already flooded app stores and scammed thousands into paying for a free service.

Because of how much ChatGPT costs to run, OpenAI seems to limit access when its servers are “at capacity.” The wait can last as long as a few hours, but if you’re patient, you’ll get through eventually. Of all the problems facing ChatGPT right now, this has been the biggest hurdle keeping people from using it more. In some cases, demand has been so high that ChatGPT has gone down for maintenance for several hours at a time over the past few months.

This seems to be less of a problem recently, though, as demand has normalized and OpenAI has learned to manage the traffic better, but in the middle of the day, it still makes an appearance from time to time.

Although ChatGPT is a very useful tool, it isn’t free of problems. It’s known for making mistakes or “hallucinations,” where it makes up an answer to something it doesn’t know. A simple example of how unreliable it can sometimes be involved misidentifying the prime minister of Japan.

Beyond just making mistakes, many people are concerned about what this human-like generative AI could mean for the future of the internet, so much so that thousands of tech leaders and prominent public figures have signed a petition to slow down development. ChatGPT was even banned in Italy due to privacy concerns, alongside complaints from the FTC, although that ban has since been reversed. Since then, the FTC has reopened investigations into OpenAI over questions of how personal consumer data is being handled.

Speaking of bans, a number of high-profile companies have disallowed the use of ChatGPT internally, including Samsung, Amazon, Verizon, and even the United States Congress. Apple is also on the list, though Tim Cook stated that he uses it, just weeks after having it banned.

There’s also the concern that generative AI like ChatGPT could result in the loss of many jobs, as many as 300 million worldwide, according to Goldman Sachs. In particular, it has taken the spotlight in Hollywood’s writers’ strike, where the union wants to ensure that AI-written scripts don’t take the jobs of working screenwriters.

Beyond that, multiple controversies have also sprung up around people using ChatGPT to handle tasks that should probably be handled by an actual person. One of the worst cases of this is generating malware, something the FBI recently warned ChatGPT is being used for.

For example, Vanderbilt University’s Peabody School was recently under fire for using ChatGPT to generate an email about a mass shooting and the importance of community. In addition, JPMorgan Chase is restricting workers’ use of the AI chatbot, especially for generating emails, something companies like Apple have also prohibited internally.

There are also privacy concerns. A recent GDPR complaint alleges that ChatGPT violates users’ privacy by collecting their data without their knowledge and using it to train the AI model.

Lastly, according to one user, ChatGPT was even made to generate Windows 11 keys for free. Of course, this is not how ChatGPT was meant to be used, but it’s significant that it could be “tricked” into generating the keys at all.

Teachers, school administrators, and developers are already finding ways around this and banning the use of ChatGPT in schools. Others are more optimistic about how ChatGPT might be used for teaching, but plagiarism is undoubtedly going to remain an issue in education for the foreseeable future. There are some ideas about how ChatGPT could “watermark” its text and fix this plagiarism problem, but as of now, detecting ChatGPT-generated text is still incredibly difficult.

OpenAI recently launched a new version of its own plagiarism detection tool, with hopes that it will squelch some of the criticism around how people are using the text generation. It uses a feature called the “AI text classifier,” which operates in a way familiar from other plagiarism software. According to OpenAI, however, the tool is still a work in progress and is “imperfect.”

Other tools like GPTZero claim to help detect ChatGPT plagiarism, too. Although they work, some extra editing on AI responses can still trip up these tools.

Well, that’s the fun part. Since its launch, people have been experimenting to discover everything the chatbot can and can’t do, and some of the results have been mind-blowing.

Learning the kinds of prompts and follow-up prompts that ChatGPT responds well to requires some experimentation, though. Much like we’ve learned to get the information we want from traditional search engines, it can take some time to get the best results from ChatGPT. If you want to get started, we have a roundup of the best ChatGPT tips.

It really all depends on what you want out of it. To start out, try using it to write a template blog post, for example, or even blocks of code if you’re a programmer.

Our writers experimented with ChatGPT too, attempting to see if it could handle holiday shopping or even properly interpret astrological makeup. In both cases, we found limitations to what it could do while still being thoroughly impressed by the results.

But the fun is in trying it out yourself. Whether you think ChatGPT is an amazing piece of tech or will lead to the destruction of the internet as we know it, it’s worth trying out for yourself to see just what it’s capable of.

Following an update on August 10, you can now use custom instructions with ChatGPT. This allows you to customize how the AI chatbot responds to your inputs so you can tailor it to your needs.

You can’t ask anything, though. OpenAI has safeguards in place in order to “build a safe and beneficial artificial general intelligence.” That means any questions that are hateful, sexist, racist, or discriminatory in any way are generally off-limits.

The announcement of ChatGPT plugins caused a great stir in the developer community, with some calling it “the most powerful developer platform ever created.” AI enthusiasts have compared it to the surge of interest in the iOS App Store when it first launched, greatly expanding the capabilities of the iPhone.

Essentially, developers will be able to build plugins directly for ChatGPT, to open it up to have access to the whole of the internet and connect directly to the APIs of specific applications. It’s ChatGPT out in the real world. Some of the examples provided by OpenAI include applications being able to perform actions on behalf of the user, retrieve real-time information, and access knowledge-based information.

Access is currently gated behind a waitlist, but early applications to use plugins with ChatGPT include Expedia, Instacart, Slack, and OpenTable, and there are already lots to explore, including our picks for the best ChatGPT plugins to try out.

Outside of the ChatGPT app itself, many apps have been announced as partners with OpenAI using the ChatGPT API. Of the initial batch, the most prominent example is Snapchat’s MyAI.

Essentially, this is a way for developers to access ChatGPT and plug its natural language capabilities directly into apps and websites. We’ve seen it used in all sorts of different cases, ranging from suggesting parts in Newegg’s PC builder to building out a travel itinerary with just a few words. Recently, OpenAI made the ChatGPT API available to everyone, and we’ve seen a surge in tools leveraging the technology, such as Discord’s Clyde chatbot or Wix’s website builder.
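As a rough sketch of what such an integration involves, the snippet below posts a single prompt to OpenAI’s chat completions endpoint using only the Python standard library. It assumes an API key in the `OPENAI_API_KEY` environment variable; the endpoint URL and model name are those in use as of this writing and may change:

```python
# Minimal sketch of a ChatGPT API call (stdlib only). Assumes an OpenAI
# API key in the OPENAI_API_KEY environment variable.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(prompt, model="gpt-3.5-turbo"):
    """The JSON body the chat completions endpoint expects."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask_chatgpt(prompt):
    """Send one prompt and return the assistant's reply text."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["choices"][0]["message"]["content"]
```

In practice, most integrations use OpenAI’s official client library rather than raw HTTP, but the request and response shapes are the same.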

What’s the future of ChatGPT and GPT-5?

There’s no doubt that the tech world has become obsessed with ChatGPT right now, and it’s not slowing down anytime soon. GPT-4, the next iteration of the model, has officially launched, though it’s currently only available for ChatGPT Plus. We do know, however, that Bing Chat is at least partially built on the GPT-4 language model, even if certain elements such as visual input aren’t available.

But the bigger development will be how ChatGPT continues to be integrated into other applications. Microsoft reportedly made a multibillion-dollar investment in ChatGPT , which is already starting to pay off. The first integration was in Teams Premium , with some of OpenAI’s features showing up to automate tasks and provide transcripts. Most prominently, Microsoft revealed 365 Copilot , which integrates ChatGPT natural language prompts directly into Office apps like Word, PowerPoint, Outlook, and more.

There were initial reports that GPT-5 was on the way and could finish training later this year, with some people claiming it would achieve AGI (artificial general intelligence). That’s a big, controversial claim, but clearly things are progressing at a rapid pace.

Since then, OpenAI has stated that GPT-5 is not on the timeline and is not currently planned. That being said, the next version, GPT-4.5, is currently training and may be available later this year; OpenAI has indicated that training may be done as early as September or October.

All that to say, if you think AI is a big deal now, just wait until it’s built into the most common applications that are used for work and school.

ChatGPT remains the most popular AI chatbot at the moment, but it’s not completely without competition. Microsoft’s Bing Chat is the biggest rival, using OpenAI’s GPT-4 model as the basis for its answers. Although it requires downloading the Edge browser, Bing Chat is free and offers some added features such as different writing modes, image creation, and search links. It even received a significant update recently that introduced export, third-party plugins, and multimodal support. There’s also YouChat, which uses GPT-3, an older model from OpenAI, and Forefront AI, which gives you access to GPT-4 and beyond.

The biggest non-GPT competitor to ChatGPT is Google Bard. It’s based on Google’s own homegrown language model, LaMDA, and Google seems intent on competing directly with OpenAI. The most recent updates make Bard a far more compelling alternative to ChatGPT, even if it’s not quite there yet.

There are a number of other chatbots out there, some of which are based on Meta’s open-source language model, LLaMA, such as Vicuna and HuggingChat.

Reports suggest Apple has been working on a ChatGPT rival for years as well, though we haven’t seen it yet. Some reporters say it is “significantly behind competitors” at the moment.

Are ChatGPT chats private?

It depends on what you mean by private. All chats with ChatGPT are used by OpenAI to further tune the models, which can actually involve the use of human trainers. No, that doesn’t mean a human is looking through every question you ask ChatGPT, but there’s a reason OpenAI warns against providing any personal information to ChatGPT.

It should be noted that if you don’t delete your chats, the conversations will appear in the left sidebar. Unlike with other chatbots, individual messages within a conversation cannot be deleted, though they can be edited using the pencil icon that appears when you hover over a message. When you delete a conversation, however, it’s not that ChatGPT forgets it ever happened; it just disappears from the sidebar chat history.

Fortunately, OpenAI has recently announced a way to hide your chats from the sidebar. These “hidden” chats won’t be used to train AI models either.

When was ChatGPT released?

ChatGPT was originally launched to the public in November 2022 by OpenAI. The chatbot is based on the GPT-3.5 LLM, a fine-tuned version of GPT-3 that first launched on March 15, 2022. GPT-3 itself, though, has been around for a few years now: it was first released in June 2020, but only as an autoregressive language model.

The predecessors to GPT-3 had very limited public exposure. GPT-2 was announced in February 2019, and the first research paper on GPT was published on OpenAI’s website in 2018.

Will ChatGPT replace Google Search?

Rather than replace it, chatbots are likely to be integrated directly into search. Microsoft has already done this with Bing Chat and Bing, which puts a “chat” tab right into the menu of Bing search.

Even Google has begun experimenting with integrating the smarts of Google Bard into search through its Search Generative Experience . We’re in the early days where all these exist as different products, but it’s not hard to imagine a future where it’s a completely unified experience.

Is Bing Chat the same as ChatGPT?

Microsoft has officially brought ChatGPT to Bing in the form of Bing Chat. After a long beta period, it is now officially available to try out. But unlike ChatGPT, Bing Chat does require downloading the latest version of Edge, so Safari or Chrome users are out of luck.

In the early days of its release, Bing Chat was capable of some unhinged responses, but Microsoft has been quick to tame things a bit. It was recently announced that Bing Chat is using the latest GPT-4 language model, meaning it’s more powerful and accurate than ChatGPT. The new Edge Copilot mode also provides a more user-friendly way to get started, offering suggested prompts, links to learn more, and ways to tweak the kinds of answers it gives you. And now with Windows Copilot, Bing Chat will live right on your desktop.

Is Google Bard the same as ChatGPT?

Unlike Bing Chat, Google Bard uses an entirely different LLM to power its natural language capabilities. Since its release, Bard has used LaMDA, the company’s own model, which stands for Language Model for Dialogue Applications. As was demonstrated early on, Bard doesn’t have quite the same precision in its answers.

Reports indicate, however, that Bard is getting a massive update soon, going from being trained on 30 billion parameters up to 600 billion parameters. That could make it closer to what is possible with GPT-4.

Can you write essays with ChatGPT?

The use of ChatGPT has been full of controversy, with many onlookers considering how the power of AI will change everything from search engines to novel writing. It’s even demonstrated the ability to earn students surprisingly good grades in essay writing.

Essay writing for students is one of the most obvious examples of where ChatGPT could become a problem. ChatGPT might not write this article all that well, but it feels particularly easy to use for essay writing. Some generative AI tools, such as Caktus AI, are built specifically for this purpose.

Can ChatGPT write and debug code?

Absolutely — it’s one of the most powerful features of ChatGPT. As with everything with AI, you’ll want to double-check everything it produces, because it won’t always get your code right. But it’s certainly powerful at both writing code from scratch and debugging code.

Developers have used it to create websites, applications, and games from scratch — all of which are made more powerful with GPT-4, of course. There’s even a plug-in called ChatGPT Code Interpreter that makes programming with AI even more accessible.
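To illustrate the kind of small debugging task ChatGPT handles well, here is a toy example of a buggy function alongside a corrected version similar to what the chatbot typically suggests. Both functions are invented for illustration:

```python
# A typical debugging task you might paste into ChatGPT: this function
# is meant to average a list but divides by the wrong count.
def buggy_average(numbers):
    # Bug: divides by one fewer than the number of items.
    return sum(numbers) / (len(numbers) - 1)

# The kind of corrected version ChatGPT typically returns, with the
# empty-list edge case handled as well.
def fixed_average(numbers):
    if not numbers:
        return 0.0
    return sum(numbers) / len(numbers)
```

As noted above, you should still verify anything it returns: a fix can look plausible while quietly changing behavior, so tests remain your responsibility.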

What is the ChatGPT character limit?

OpenAI doesn’t set an exact character limit, but ChatGPT will cut off its responses at around 500 words, or roughly 4,000 characters. If you ask the chatbot for a specific number of words above 500, you might find that it cuts off mid-sentence somewhere after the 500-word mark.

One way to get around this is just to ask it to “go on” or “continue,” but it depends on the prompt and type of response. Sometimes ChatGPT will more or less repeat the previous answers in different words.

The best way to get longer responses is to upgrade to ChatGPT Plus.
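For developers working through the API, the “go on” workaround above can be automated: the chat completions response includes a `finish_reason` field, and the value `"length"` indicates the reply hit the length cap. A minimal client-side sketch:

```python
# Client-side sketch of the "continue" workaround. `response` follows
# the chat completions JSON shape, where finish_reason == "length"
# means the reply was truncated by the length cap.
def is_truncated(response):
    return response["choices"][0].get("finish_reason") == "length"

def next_user_turn(response):
    """Return a follow-up message if the reply was cut off, else None."""
    if is_truncated(response):
        return {"role": "user", "content": "Please continue."}
    return None
```

As the article notes, this doesn’t always work cleanly; depending on the prompt, the model may partly repeat its previous answer instead of picking up where it left off.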

Is there a ChatGPT bug bounty program?

Yes. A bug bounty program for ChatGPT was recently announced. The program was unveiled officially on OpenAI’s website, which details the types of “cash awards” on offer. They range from $200 up to $20,000 for what OpenAI calls “exceptional discoveries.”

While addressing security researchers interested in getting involved in the program, OpenAI said it recognized “the critical importance of security and view it as a collaborative effort. By sharing your findings, you will play a crucial role in making our technology safer for everyone.”

Do you need to download ChatGPT?

ChatGPT is available via a webpage, so no download is needed. However, OpenAI has finally released a free, official iOS app, which does need to be downloaded from the iOS App Store. For many months, the various app stores were full of fake versions; these are still out there and should be treated with caution, as they are not official ChatGPT apps. An official Android app is now available as well, though only in certain countries.

On desktop, there are a couple of ways to install ChatGPT, though. First, you can navigate to the ChatGPT website and save it as a Windows app through Edge. Go to the site, click the ellipsis menu, hover over Apps, and select Install this site as an app to load ChatGPT from your desktop.

Other tools like MacGPT also allow shortcuts to access the browser service from your desktop.

Can you use ChatGPT on iPhone or Android?

Now that there’s an official iOS app, you no longer have to rely solely on the web app to use ChatGPT on your phone. So, whether with the official app downloaded through the App Store or just the web version, you can certainly use ChatGPT on iPhones. There’s even a way to replace Siri with ChatGPT on your iPhone, as well as some useful mobile apps like Perplexity AI.

As for Android, the official app is available from the Google Play Store in supported countries. Elsewhere, you can use the web app: just as on desktop, go to chat.openai.com to start using ChatGPT.

Can you get ChatGPT to answer any question?

Not exactly. ChatGPT has limitations in the kinds of questions it can answer. First of all, it can’t write about anything that requires internet knowledge after late 2021, which is when its training stopped.

Beyond that, ChatGPT is careful about answering questions that might imply illegal, explicit, or damaging activity. It’ll avoid swearing or political debates, and will (usually) avoid making malware. Some jailbreaking can get around these restrictions, but OpenAI is constantly tightening its content policies to restrict unwanted answers. One common jailbreaking technique is the DAN (Do Anything Now) prompt, though OpenAI has worked hard to plug these holes over time.

What is Auto-GPT?

Built on GPT-4, Auto-GPT is the latest evolution of AI technology to cause a stir in the industry. It’s not directly related to ChatGPT or OpenAI; instead, it’s an open-source Python application that got into the hands of developers all over the internet when it was published on GitHub.

With ChatGPT or ChatGPT Plus, the capabilities of the AI are limited to a single chat window. Auto-GPT, at its simplest, is making AI autonomous. It can be given a set of goals, and then take the necessary steps towards accomplishing that goal across the internet, including connecting up with applications and software.

According to the official description on GitHub, Auto-GPT is an “experimental open-source application showcasing the capabilities of the GPT-4 language model. This program, driven by GPT-4, chains together LLM ‘thoughts’, to autonomously achieve whatever goal you set. As one of the first examples of GPT-4 running fully autonomously, Auto-GPT pushes the boundaries of what is possible with AI.”

The demo used on the GitHub page is simple — just create a recipe appropriate for Easter and save it to a file. What’s neat is how Auto-GPT breaks down the steps the AI is taking to accomplish the goal, including the “thoughts” and “reasoning” behind its actions. Auto-GPT is already being used in a variety of different applications, with some touting it as the beginning of AGI (Artificial General Intelligence) due to its autonomous nature.
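The plan-act-observe loop described above can be sketched in a few lines. This is a conceptual toy, not Auto-GPT’s actual code: the `plan_step` and `act` callables stand in for real LLM calls and tool invocations, which in Auto-GPT involve prompting GPT-4 and running commands:

```python
# Conceptual sketch of the Auto-GPT pattern (not Auto-GPT's real code):
# repeatedly ask a planner for the next step toward a goal, act on it,
# and feed the result back into the loop.
def run_agent(goal, plan_step, act, max_steps=5):
    history = []
    for _ in range(max_steps):
        step = plan_step(goal, history)  # stands in for an LLM "thought"
        if step is None:                 # planner signals the goal is done
            break
        history.append((step, act(step)))  # act() stands in for a tool call
    return history
```

The essential difference from a chat session is that the loop, not the human, decides what to do next, which is exactly what makes the autonomous behavior both powerful and harder to supervise.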

What is GPT-4 and how does it compare to GPT-3.5?

GPT-4 is a more advanced LLM — the most powerful that OpenAI currently offers. At the current moment, OpenAI only offers access to GPT-4 through ChatGPT Plus. GPT-3.5, on the other hand, is the LLM that powers the free ChatGPT tool. OpenAI no longer says exactly how many parameters these advanced models are trained on, but it’s rumored that GPT-4 boasts up to 1 trillion parameters.

Regardless, there is a fairly dramatic difference in quality between GPT-3.5 and GPT-4. GPT-4 offers much more precise answers, is significantly better at coding and creative collaboration, and can provide (and respond to) much longer selections of text. GPT-4 remains the best model available, while GPT-3.5 is more in line with other models on the market.

Who owns the copyright to content created by ChatGPT?

This is a question open to debate. Much of the conversation around copyright and AI is ongoing, with some saying generative AI is “stealing” the work of the content it was trained on. This has become increasingly contentious in the world of AI art. Companies like Adobe are finding ways around this by only training models on stock image libraries that already have proper artist credit and legal boundaries.

According to OpenAI, however, you have the right to reprint, sell, and merchandise anything that was created with ChatGPT or ChatGPT Plus. So, you’re not going to get sued by OpenAI.

The larger topic of copyright law regarding generative AI is still to be determined by various lawmakers and interpreters of the law, especially since copyright law as it currently stands technically only protects content created by human beings.


Fionna Agomuoh



Want To Get Into The Ivy League? Here’s How Long The Application Process Really Takes


One of the main gates on the Brown University campus, decorated with the University crest. (Photo by Rick Friedman/Corbis via Getty Images)

While the college admissions process begins in earnest during a student’s junior year of high school, a standout college admissions profile is the result of years of strategic and intentional planning. This is especially true for students with Ivy League dreams—joining the ranks of students at Yale, Princeton, and Harvard requires time, dedication, and consideration long before students start their applications. Even the most talented, qualified students underestimate the amount of time that goes into planning for and completing the application process. Starting early and planning ahead are crucial for crafting stand-out Ivy League applications.

Here’s a detailed breakdown of how much time you should realistically expect to invest in the Ivy League admissions process, from start to finish:

Developing Your Hook: 4 Years

A “ hook ” is the element of a student’s profile that “hooks” the attention of admissions officers—it is the X factor that distinguishes a student from thousands of other applicants. It should be the anchoring interest around which all other elements of an application coalesce. Developing this defining passion requires time and dedication, so the earlier a student starts intentionally exploring their interests to develop this hook, the better. Beginning in freshman year, students should explore activities, courses, and volunteer opportunities in their schools and communities, thoughtfully weighing what they most enjoy as they do so. Over the next few years, students should hone their hook through continued involvement in extracurricular or volunteer opportunities that align with their guiding interests, seeking leadership opportunities when applicable.

Building an Independent Project: 2 years

One of the most effective ways to showcase a hook is through an independent passion project. Beginning in sophomore or junior year (or, at the latest, the fall of senior year), students should craft an initiative that uses their passions to better their communities, as this will demonstrate self-motivation, genuine passion, and leadership acumen to Ivy League and other top colleges. The project could take the shape of scientific research, a nonprofit, a community initiative, or a startup business. Students should spend a few months brainstorming, planning, and setting clear goals before entering the implementation stage. They should document their progress meticulously as they overcome hurdles and meet their goals, as this will enable them to relay their successes clearly and specifically on their applications.

Researching Colleges & Structuring College List: 6 months–1 year

During their junior year, students should consult a variety of resources and rankings and begin to develop their college lists. As they do so, they should keep in mind that every ranking system takes unique factors into account—for instance, while U.S. News and World Report focuses on metrics related to academic quality such as academic reputation and graduation rates, Forbes is heavily focused on financial metrics , considering ROI, average debt, and alumni salary. In addition to weighing schools’ rankings, students should also seek to balance their college lists by comparing their academic standing with the academic profile of admitted students. If a student’s GPA and test scores fall within the middle 50% of admitted students, the school is a match; if they are above the 75th percentile, that school is likely a safety, and if their scores are below the 25th percentile, the school is a reach.


Studying & Taking Standardized Tests: 6 months–1.5 years

Typically, students will have completed the mathematics coursework needed to take the SAT and ACT by the spring of their sophomore year and should sit for diagnostic ACT and SAT tests around that time. Once they receive their diagnostic scores, students should create a study plan that will enable them to reach their goal score, which should be set relative to their college aspirations; students with Ivy League dreams should aim to earn a 34+ on the ACT or a 1550+ on the SAT. The amount of time needed to prepare for and ace standardized tests often varies greatly depending on students’ diagnostic scores, goal scores, and how much time and effort they devote to studying.

Writing Essays & Assembling Applications: 6 months

Finally, completing the actual application is perhaps the shortest stage of the process, though it is the most important. Students who have dedicated time and effort to building their applicant profiles throughout their high school careers will reap the benefits of their long-term planning; they will be able to approach the process with a clear understanding of the unique story they wish to convey through their application components. Students should kickstart the process in the spring of their junior year by requesting recommendations from their teachers, school counselors, and other non-academic mentors. The summer before senior year is a critical time to work on the personal statement, which tends to be one of the most time-consuming elements of the application process, as it requires lengthy brainstorming, drafting, and editing. Supplemental essay prompts for specific schools are generally released in August, so students should plan to devote the remainder of their summer and fall to completing those essays. Finally, with focus and dedication, students can complete the activities list in one to two weeks; like every other element of the application, however, it deserves concerted attention and should not be saved until the last minute.

While every student is different and will need to assemble their own timeline, the college admissions process is a demanding one—particularly for students determined to gain admission to the most elite universities in the country. Students should begin preparing early in order to give themselves some leeway and submit applications that they are truly proud of.

Christopher Rim



  • Open access
  • Published: 03 June 2024

Applying large language models for automated essay scoring for non-native Japanese

  • Wenchao Li 1 &
  • Haitao Liu 2  

Humanities and Social Sciences Communications volume 11, Article number: 723 (2024)

  • Language and linguistics

Recent advancements in artificial intelligence (AI) have led to an increased use of large language models (LLMs) for language assessment tasks such as automated essay scoring (AES), automated listening tests, and automated oral proficiency assessments. The application of LLMs for AES in the context of non-native Japanese, however, remains limited. This study explores the potential of LLM-based AES by comparing the efficiency of different models, i.e. two conventional machine training technology-based methods (Jess and JWriter), two LLMs (GPT and BERT), and one Japanese local LLM (Open-Calm large model). To conduct the evaluation, a dataset consisting of 1400 story-writing scripts authored by learners with 12 different first languages was used. Statistical analysis revealed that GPT-4 outperforms Jess and JWriter, BERT, and the Japanese language-specific trained Open-Calm large model in terms of annotation accuracy and predicting learning levels. Furthermore, by comparing 18 different models that utilize various prompts, the study emphasized the significance of prompts in achieving accurate and reliable evaluations using LLMs.


Conventional machine learning technology in AES

AES has experienced significant growth with the advancement of machine learning technologies in recent decades. In the earlier stages of AES development, conventional machine learning-based approaches were commonly used. These approaches involved the following procedures: a) feeding the machine with a dataset. In this step, a dataset of essays is provided to the machine learning system. The dataset serves as the basis for training the model and establishing patterns and correlations between linguistic features and human ratings. b) training the machine learning model on linguistic features that best represent human ratings and can effectively discriminate learners’ writing proficiency. These features include lexical richness (Lu, 2012; Kyle and Crossley, 2015; Kyle et al. 2021), syntactic complexity (Lu, 2010; Liu, 2008), and text cohesion (Crossley and McNamara, 2016), among others. Conventional machine learning approaches in AES require human intervention, such as manual correction and annotation of essays; this human involvement is necessary to create a labeled dataset for training the model. Several AES systems have been developed using conventional machine learning technologies. These include the Intelligent Essay Assessor (Landauer et al. 2003), the e-rater engine by Educational Testing Service (Attali and Burstein, 2006; Burstein, 2003), MyAccess with the IntelliMetric scoring engine by Vantage Learning (Elliot, 2003), and the Bayesian Essay Test Scoring system (Rudner and Liang, 2002). These systems have played a significant role in automating the essay scoring process and providing quick and consistent feedback to learners. However, as touched upon earlier, conventional machine learning approaches rely on predetermined linguistic features and often require manual intervention, making them less flexible and potentially limiting their generalizability to different contexts.
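The regression-style pipeline described above can be sketched in a few lines of Python. The features and weights below are illustrative stand-ins, not those of any production system such as e-rater; a real system learns its weights from a labeled dataset of human-rated essays.

```python
# Minimal sketch of a conventional feature-based AES scorer.  Feature
# names and weights are hypothetical, for illustration only.

def extract_features(essay: str) -> dict:
    """Compute a few simple linguistic features from raw text."""
    tokens = essay.split()
    types = set(t.lower().strip(".,!?") for t in tokens)
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "type_token_ratio": len(types) / max(len(tokens), 1),
        "mean_sentence_length": len(tokens) / max(len(sentences), 1),
        "long_word_ratio": sum(len(t) > 6 for t in tokens) / max(len(tokens), 1),
    }

def score(essay: str, weights: dict, bias: float = 1.0) -> float:
    """Linear combination of features, as in regression-based AES."""
    feats = extract_features(essay)
    return bias + sum(weights[name] * value for name, value in feats.items())

weights = {"type_token_ratio": 2.0, "mean_sentence_length": 0.1,
           "long_word_ratio": 1.5}
essay = ("The committee evaluated several proposals. "
         "Each proposal addressed different educational challenges.")
print(round(score(essay, weights), 2))
```

In a real pipeline the weights would be fitted by linear regression against human ratings, which is exactly where the manual annotation burden discussed above comes from.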

In the context of the Japanese language, conventional machine learning-incorporated AES tools include Jess (Ishioka and Kameda, 2006) and JWriter (Lee and Hasebe, 2017). Jess assesses essays by deducting points from a perfect score, utilizing the Mainichi Daily News newspaper as its database. The evaluation criteria employed by Jess encompass rhetorical elements (e.g., reading comprehension, vocabulary diversity, percentage of complex words, and percentage of passive sentences), organizational structures (e.g., forward and reverse connection structures), and content analysis (e.g., latent semantic indexing). JWriter employs linear regression analysis to assign weights to various measurement indices, such as average sentence length and total number of characters; these weighted indices are then combined to derive the overall score. A pilot study involving the Jess model was conducted on 1320 essays at three proficiency levels: primary, intermediate, and advanced. The results indicated that the Jess model failed to significantly distinguish between these levels. Of the 16 measures used, four (median sentence length, median clause length, median number of phrases, and maximum number of phrases) did not show statistically significant differences between the levels. Two further measures, the number of attributive declined words and the kanji/kana ratio, exhibited between-level differences but lacked linear progression. The remaining measures, including maximum sentence length, maximum clause length, number of attributive conjugated words, maximum number of consecutive infinitive forms, maximum number of conjunctive-particle clauses, k characteristic value, percentage of big words, and percentage of passive sentences, demonstrated statistically significant between-level differences and displayed linear progression.

Both Jess and JWriter exhibit notable limitations, including the manual selection of feature parameters and weights, which can introduce biases into the scoring process. The reliance on human annotators to label non-native language essays also introduces potential noise and variability in the scoring. Furthermore, an important concern is the possibility of system manipulation and cheating by learners who are aware of the regression equation utilized by the models (Hirao et al. 2020 ). These limitations emphasize the need for further advancements in AES systems to address these challenges.

Deep learning technology in AES

Deep learning has emerged as one of the approaches for improving the accuracy and effectiveness of AES. Deep learning-based AES methods utilize artificial neural networks that mimic the human brain’s functioning through layered algorithms and computational units. Unlike conventional machine learning, deep learning autonomously learns from the environment and past errors without human intervention. This enables deep learning models to establish nonlinear correlations, resulting in higher accuracy. Recent advancements in deep learning have led to the development of transformers, which are particularly effective in learning text representations. Noteworthy examples include bidirectional encoder representations from transformers (BERT) (Devlin et al. 2019 ) and the generative pretrained transformer (GPT) (OpenAI).

BERT is a linguistic representation model that utilizes a transformer architecture and is trained on two tasks: masked language modeling and next-sentence prediction (Hirao et al. 2020; Vaswani et al. 2017). In the context of AES, BERT follows specific procedures, as illustrated in Fig. 1: (a) the tokenized prompts and essays are taken as input; (b) special tokens, such as [CLS] and [SEP], are added to mark the beginning and separation of prompts and essays; (c) the transformer encoder processes the prompt and essay sequences, resulting in hidden layer sequences; (d) the hidden layers corresponding to the [CLS] tokens (T[CLS]) represent distributed representations of the prompts and essays; and (e) a multilayer perceptron uses these distributed representations as input to obtain the final score (Hirao et al. 2020).

Figure 1: AES system with BERT (Hirao et al. 2020).
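Step (b) of this pipeline, assembling the input sequence with special tokens, can be sketched as follows. Tokenization here is naive whitespace splitting for illustration; a real system would use BERT's WordPiece tokenizer before feeding the sequence to the encoder.

```python
# Sketch of assembling "[CLS] prompt [SEP] essay [SEP]" for a BERT-based
# AES model, truncating the essay when the combined sequence exceeds the
# model's maximum input length (512 tokens for standard BERT).

def build_bert_input(prompt_tokens, essay_tokens, max_len=512):
    """Concatenate prompt and essay with special tokens, truncating the
    essay if the combined sequence is too long."""
    overhead = 3  # [CLS] + two [SEP]
    room = max_len - overhead - len(prompt_tokens)
    essay_tokens = essay_tokens[:max(room, 0)]
    return ["[CLS]"] + prompt_tokens + ["[SEP]"] + essay_tokens + ["[SEP]"]

seq = build_bert_input("write about your picnic".split(),
                       "we went to the park".split())
print(seq[0], seq[-1])  # the special tokens frame the sequence
```

The truncation step also illustrates the first limitation noted later in this section: learner essays that exceed the model's length limit simply lose their tail.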

The training of BERT on a substantial amount of sentence data through the masked language model (MLM) objective allows it to capture contextual information within its hidden layers. Consequently, BERT is expected to be capable of identifying artificial essays as invalid and assigning them lower scores (Mizumoto and Eguchi, 2023). In the context of AES for nonnative Japanese learners, Hirao et al. (2020) combined the long short-term memory (LSTM) model proposed by Hochreiter and Schmidhuber (1997) with BERT to develop a tailored automated essay scoring system. The findings of their study revealed that the BERT model outperformed both the conventional machine learning approach utilizing character-type features such as kanji and hiragana, and the standalone LSTM model. Takeuchi et al. (2021) presented an approach to Japanese AES that eliminates the requirement for pre-scored essays by relying solely on reference texts or a model answer for the essay task. They investigated multiple similarity evaluation methods, including frequency of morphemes, idf values calculated on Wikipedia, LSI, LDA, word-embedding vectors, and document vectors produced by BERT. The experimental findings revealed that the method utilizing the frequency of morphemes with idf values exhibited the strongest correlation with human-annotated scores across different essay tasks. The utilization of BERT in AES nevertheless encounters several limitations. First, essays often exceed the model’s maximum input length. Second, only score labels are available for training, which restricts access to additional information.

Mizumoto and Eguchi (2023) were pioneers in employing the GPT model for AES in non-native English writing. Their study focused on evaluating the accuracy and reliability of AES using the GPT-3 text-davinci-003 model, analyzing a dataset of 12,100 essays from the corpus of nonnative written English (TOEFL11). The findings indicated that AES utilizing the GPT-3 model exhibited a certain degree of accuracy and reliability, and the authors suggest that GPT-3-based AES systems hold the potential to support human ratings. However, applying the GPT model to AES presents a unique natural language processing (NLP) task that involves considerations such as nonnative language proficiency, the influence of the learner’s first language on the output in the target language, and identifying the linguistic features that best indicate writing quality in a specific language. These linguistic features may differ morphologically or syntactically from those present in the learners’ first language, as observed in (1)–(3).

(1) Isolating (Chinese):

我-送了-他-一本-书

Wǒ-sòngle-tā-yī běn-shū

1SG-give.PST-3SG-one.CL-book

“I gave him a book.”

(2) Agglutinative (Japanese):

彼-に-本-を-あげ-まし-た

Kare-ni-hon-o-age-mashi-ta

3SG-DAT-book-ACC-give.HON-PST

“I gave him a book.”

(3) Inflectional (English):

give, give-s, gave, given, giving

Additionally, the morphological agglutination and subject-object-verb (SOV) order in Japanese, along with its idiomatic expressions, pose additional challenges for applying language models in AES tasks (4).

足-が 棒-に なり-ました

Ashi-ga bō-ni nari-mashita

leg-NOM stick-DAT become-PST

“My leg became like a stick (I am extremely tired).”

The example sentence demonstrates the morpho-syntactic structure of Japanese and the presence of an idiomatic expression. In this sentence, the verb “なる” (naru), meaning “to become”, appears at the end of the sentence. The verb stem “なり” (nari) is followed by morphemes indicating honorification (“ます”, masu) and tense (“た”, ta), showcasing agglutination. While the sentence can be literally translated as “my leg became like a stick”, it carries the idiomatic interpretation “I am extremely tired”.

To overcome this issue, CyberAgent Inc. ( 2023 ) has developed the Open-Calm series of language models specifically designed for Japanese. Open-Calm consists of pre-trained models available in various sizes, such as Small, Medium, Large, and 7b. Figure 2 depicts the fundamental structure of the Open-Calm model. A key feature of this architecture is the incorporation of the Lora Adapter and GPT-NeoX frameworks, which can enhance its language processing capabilities.

Figure 2: GPT-NeoX model architecture (Okgetheng and Takeuchi, 2024).

In a recent study conducted by Okgetheng and Takeuchi ( 2024 ), they assessed the efficacy of Open-Calm language models in grading Japanese essays. The research utilized a dataset of approximately 300 essays, which were annotated by native Japanese educators. The findings of the study demonstrate the considerable potential of Open-Calm language models in automated Japanese essay scoring. Specifically, among the Open-Calm family, the Open-Calm Large model (referred to as OCLL) exhibited the highest performance. However, it is important to note that, as of the current date, the Open-Calm Large model does not offer public access to its server. Consequently, users are required to independently deploy and operate the environment for OCLL. In order to utilize OCLL, users must have a PC equipped with an NVIDIA GeForce RTX 3060 (8 or 12 GB VRAM).

In summary, while the potential of LLMs in automated scoring of nonnative Japanese essays has been demonstrated in two studies—BERT-driven AES (Hirao et al. 2020 ) and OCLL-based AES (Okgetheng and Takeuchi, 2024 )—the number of research efforts in this area remains limited.

Another significant challenge in applying LLMs to AES lies in prompt engineering and ensuring its reliability and effectiveness (Brown et al. 2020 ; Rae et al. 2021 ; Zhang et al. 2021 ). Various prompting strategies have been proposed, such as the zero-shot chain of thought (CoT) approach (Kojima et al. 2022 ), which involves manually crafting diverse and effective examples. However, manual efforts can lead to mistakes. To address this, Zhang et al. ( 2021 ) introduced an automatic CoT prompting method called Auto-CoT, which demonstrates matching or superior performance compared to the CoT paradigm. Another prompt framework is trees of thoughts, enabling a model to self-evaluate its progress at intermediate stages of problem-solving through deliberate reasoning (Yao et al. 2023 ).
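As a rough illustration of how a prompting strategy of this kind might be wired up for AES, the sketch below assembles a scoring prompt and optionally appends a zero-shot chain-of-thought cue in the style of Kojima et al. (2022). The wording and the criteria labels are hypothetical examples, not the prompt actually used in this study.

```python
# Illustrative prompt builder for LLM-based AES.  The criteria names
# mirror the five scoring categories discussed in this paper, but the
# template text itself is a made-up example.

CRITERIA = ["lexical richness", "syntactic complexity", "cohesion",
            "content elaboration", "grammatical accuracy"]

def build_scoring_prompt(essay: str, criteria=CRITERIA,
                         zero_shot_cot=True) -> str:
    lines = ["You are grading a Japanese learner's essay.",
             "Rate each criterion from 1 to 6:"]
    lines += [f"- {c}" for c in criteria]
    lines += ["", "Essay:", essay, ""]
    if zero_shot_cot:
        # Zero-shot CoT: a single trailing cue elicits step-by-step reasoning.
        lines.append("Let's think step by step.")
    return "\n".join(lines)

print(build_scoring_prompt("昨日、公園へ行きました。")[:55])
```

The point of making the CoT cue a flag is that prompt variants like this are exactly what the 18-model comparison in this study manipulates.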

Beyond linguistic studies, there has been a noticeable increase in the number of foreign workers in Japan and Japanese learners worldwide (Ministry of Health, Labor, and Welfare of Japan, 2022 ; Japan Foundation, 2021 ). However, existing assessment methods, such as the Japanese Language Proficiency Test (JLPT), J-CAT, and TTBJ Footnote 1 , primarily focus on reading, listening, vocabulary, and grammar skills, neglecting the evaluation of writing proficiency. As the number of workers and language learners continues to grow, there is a rising demand for an efficient AES system that can reduce costs and time for raters and be utilized for employment, examinations, and self-study purposes.

This study aims to explore the potential of LLM-based AES by comparing the effectiveness of five models: two LLMs (GPT Footnote 2 and BERT), one Japanese local LLM (OCLL), and two conventional machine learning-based methods (linguistic feature-based scoring tools - Jess and JWriter).

The research questions addressed in this study are as follows:

To what extent do the LLM-driven AES and linguistic feature-based AES, when used as automated tools to support human rating, accurately reflect test takers’ actual performance?

What influence does the prompt have on the accuracy and performance of LLM-based AES methods?

The subsequent sections of the manuscript cover the methodology, including the assessment measures for nonnative Japanese writing proficiency, criteria for prompts, and the dataset. The evaluation section focuses on the analysis of annotations and rating scores generated by LLM-driven and linguistic feature-based AES methods.

Methodology

The dataset utilized in this study was obtained from the International Corpus of Japanese as a Second Language (I-JAS) Footnote 3 . This corpus consisted of 1000 participants who represented 12 different first languages. For the study, the participants were given a story-writing task on a personal computer. They were required to write two stories based on the 4-panel illustrations titled “Picnic” and “The key” (see Appendix A). Background information for the participants was provided by the corpus, including their Japanese language proficiency levels assessed through two online tests: J-CAT and SPOT. These tests evaluated their reading, listening, vocabulary, and grammar abilities. The learners’ proficiency levels were categorized into six levels aligned with the Common European Framework of Reference for Languages (CEFR) and the Reference Framework for Japanese Language Education (RFJLE): A1, A2, B1, B2, C1, and C2. According to Lee et al. ( 2015 ), there is a high level of agreement (r = 0.86) between the J-CAT and SPOT assessments, indicating that the proficiency certifications provided by J-CAT are consistent with those of SPOT. However, it is important to note that the scores of J-CAT and SPOT do not have a one-to-one correspondence. In this study, the J-CAT scores were used as a benchmark to differentiate learners of different proficiency levels. A total of 1400 essays were utilized, representing the beginner (aligned with A1), A2, B1, B2, C1, and C2 levels based on the J-CAT scores. Table 1 provides information about the learners’ proficiency levels and their corresponding J-CAT and SPOT scores.

A dataset comprising a total of 1400 essays from the story-writing tasks was collected. Among these, 714 essays were utilized to evaluate the reliability of the LLM-based AES method, while the remaining 686 essays were designated as development data to assess the LLM-based AES’s capability to distinguish participants with varying proficiency levels. The GPT-4 API was used in this study. A detailed explanation of the prompt-assessment criteria is provided in the Prompt section. All essays were sent to the model for measurement and scoring.

Measures of writing proficiency for nonnative Japanese

Japanese exhibits a morphologically agglutinative structure where morphemes are attached to the word stem to convey grammatical functions such as tense, aspect, voice, and honorifics, e.g. (5).

食べ-させ-られ-まし-た-か

tabe-sase-rare-mashi-ta-ka

[eat (stem)-causative-passive voice-honorification-tense. past-question marker]

Japanese employs nine case particles to indicate grammatical functions, including: the nominative case particle が (ga), the accusative case particle を (o), the genitive case particle の (no), the dative case particle に (ni), the locative/instrumental case particle で (de), the ablative case particle から (kara), the directional case particle へ (e), and the comitative case particle と (to). The agglutinative nature of the language, combined with the case particle system, provides an efficient means of distinguishing between active and passive voice, either through morphemes or case particles, e.g. 食べる taberu “eat” (conclusive form, active voice) vs. 食べられる taberareru “be eaten” (passive voice). In the active voice, “パン を 食べる” (pan o taberu) translates to “to eat bread”, whereas in the passive voice it becomes “パン が 食べられた” (pan ga taberareta), meaning “(the) bread was eaten”. Additionally, different conjugations of the same lemma are counted as one type in order to ensure a comprehensive assessment of the language features; for example, 食べる taberu “eat” (conclusive form), 食べている tabeteiru “eat” (progressive form), and 食べた tabeta “eat” (past form) count as a single type.
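The convention of counting all conjugations of one lemma as a single type can be sketched with a toy lemma lookup. The table below is hand-made for illustration; a real pipeline would obtain lemmas from a Japanese morphological analyzer.

```python
# Toy sketch of lemma-based type counting: conjugated forms of the same
# lemma collapse to one type.  The lemma table is a hypothetical stand-in
# for the output of a morphological analyzer.

LEMMA = {"食べる": "食べる", "食べている": "食べる", "食べた": "食べる",
         "パン": "パン", "を": "を"}

def count_lemma_types(tokens):
    """Number of distinct lemmas; tokens missing from the table fall
    back to their surface form."""
    return len({LEMMA.get(t, t) for t in tokens})

tokens = ["パン", "を", "食べた", "パン", "を", "食べている"]
print(count_lemma_types(tokens))  # 3 types: パン, を, 食べる
```

Collapsing conjugations this way keeps lexical-diversity measures from inflating simply because Japanese verbs surface in many inflected forms.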

To incorporate these features, previous research (Suzuki, 1999 ; Watanabe et al. 1988 ; Ishioka, 2001 ; Ishioka and Kameda, 2006 ; Hirao et al. 2020 ) has identified complexity, fluency, and accuracy as crucial factors for evaluating writing quality. These criteria are assessed through various aspects, including lexical richness (lexical density, diversity, and sophistication), syntactic complexity, and cohesion (Kyle et al. 2021 ; Mizumoto and Eguchi, 2023 ; Ure, 1971 ; Halliday, 1985 ; Barkaoui and Hadidi, 2020 ; Zenker and Kyle, 2021 ; Kim et al. 2018 ; Lu, 2017 ; Ortega, 2015 ). Therefore, this study proposes five scoring categories: lexical richness, syntactic complexity, cohesion, content elaboration, and grammatical accuracy. A total of 16 measures were employed to capture these categories. The calculation process and specific details of these measures can be found in Table 2 .

T-unit, first introduced by Hunt ( 1966 ), is a measure used for evaluating speech and composition. It serves as an indicator of syntactic development and represents the shortest units into which a piece of discourse can be divided without leaving any sentence fragments. In the context of Japanese language assessment, Sakoda and Hosoi ( 2020 ) utilized T-unit as the basic unit to assess the accuracy and complexity of Japanese learners’ speaking and storytelling. The calculation of T-units in Japanese follows the following principles:

A single main clause constitutes 1 T-unit, regardless of the presence or absence of dependent clauses, e.g. (6).

ケンとマリはピクニックに行きました (main clause): 1 T-unit.

If a sentence contains a main clause along with subclauses, each subclause is considered part of the same T-unit, e.g. (7).

天気が良かった の で (subclause)、ケンとマリはピクニックに行きました (main clause): 1 T-unit.

In the case of coordinate clauses, where multiple clauses are connected, each coordinated clause is counted separately. Thus, a sentence with coordinate clauses may have 2 T-units or more, e.g. (8).

ケンは地図で場所を探して (coordinate clause)、マリはサンドイッチを作りました (coordinate clause): 2 T-units.
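The three counting principles above can be sketched directly, given clauses that are already segmented and labeled by type. Obtaining that segmentation automatically would require a Japanese parser; here the labels are supplied by hand for illustration.

```python
# Sketch of T-unit counting per the principles above: each main clause
# is one T-unit, subclauses attach to their main clause, and each
# coordinate clause counts separately.

def count_t_units(clauses):
    """clauses: list of (text, kind) with kind in {"main", "sub",
    "coordinate"}."""
    return sum(1 for _, kind in clauses if kind in ("main", "coordinate"))

# Example (7): subclause + main clause -> 1 T-unit
ex7 = [("天気が良かったので", "sub"),
       ("ケンとマリはピクニックに行きました", "main")]
# Example (8): two coordinate clauses -> 2 T-units
ex8 = [("ケンは地図で場所を探して", "coordinate"),
       ("マリはサンドイッチを作りました", "coordinate")]
print(count_t_units(ex7), count_t_units(ex8))  # 1 2
```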

Lexical diversity refers to the range of words used within a text (Engber, 1995; Kyle et al. 2021) and is considered a useful measure of the breadth of vocabulary in Ln production (Jarvis, 2013a, 2013b).

The type/token ratio (TTR) is widely recognized as a straightforward measure for calculating lexical diversity and has been employed in numerous studies. These studies have demonstrated a strong correlation between TTR and other methods of measuring lexical diversity (e.g., Bentz et al. 2016 ; Čech and Miroslav, 2018 ; Çöltekin and Taraka, 2018 ). TTR is computed by considering both the number of unique words (types) and the total number of words (tokens) in a given text. Given that the length of learners’ writing texts can vary, this study employs the moving average type-token ratio (MATTR) to mitigate the influence of text length. MATTR is calculated using a 50-word moving window. Initially, a TTR is determined for words 1–50 in an essay, followed by words 2–51, 3–52, and so on until the end of the essay is reached (Díez-Ortega and Kyle, 2023 ). The final MATTR scores were obtained by averaging the TTR scores for all 50-word windows. The following formula was employed to derive MATTR:

$$\mathrm{MATTR}(W)=\frac{\sum_{i=1}^{N-W+1}F_{i}}{W(N-W+1)}$$

Here, N refers to the number of tokens in the text, W is the window size (W < N), and F_i is the number of types in the i-th window. MATTR(W) is thus the mean of the type-token ratios (TTRs) over all windows. It is expected that individuals with higher language proficiency will produce texts with greater lexical diversity, as indicated by higher MATTR scores.
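The MATTR formula above translates directly into code: average the TTR over every 50-word moving window. The sketch below falls back to plain TTR for texts shorter than the window, a practical convention not spelled out in the formula.

```python
# Direct implementation of MATTR: mean type-token ratio over all
# moving windows of a fixed size (50 words by default).

def mattr(tokens, window=50):
    n = len(tokens)
    if n == 0:
        return 0.0
    if n < window:
        # Fallback for short texts: plain TTR (an assumption, not part
        # of the formula itself).
        return len(set(tokens)) / n
    ttrs = [len(set(tokens[i:i + window])) / window
            for i in range(n - window + 1)]
    return sum(ttrs) / len(ttrs)

# A maximally repetitive text scores near 0; an all-distinct text scores 1.0.
print(mattr(["a"] * 100), mattr([str(i) for i in range(100)]))
```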

Lexical density was captured by the ratio of the number of lexical words to the total number of words (Lu, 2012). Lexical sophistication refers to the utilization of advanced vocabulary, often evaluated through word frequency indices (Crossley et al. 2013; Haberman, 2008; Kyle and Crossley, 2015; Laufer and Nation, 1995; Lu, 2012; Read, 2000). In the context of writing, lexical sophistication can be interpreted as vocabulary breadth, which entails the appropriate usage of vocabulary items across various lexico-grammatical contexts and registers (Garner et al. 2019; Kim et al. 2018; Kyle et al. 2018). In Japanese specifically, words are considered lexically sophisticated if they are not included in the “Japanese Education Vocabulary List Ver 1.0”. Footnote 4 Consequently, lexical sophistication was calculated as the number of sophisticated word types relative to the total number of words per essay. Furthermore, it has been suggested that sentences in Japanese writing should ideally be no longer than 40 to 50 characters, as this promotes readability; therefore, the median and maximum sentence length can serve as useful assessment indices (Ishioka and Kameda, 2006).
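The two ratio measures just described can be sketched as follows. The content-word test and the basic-vocabulary set are toy stand-ins: a real implementation would take POS tags from a Japanese morphological analyzer and check membership against the Japanese Education Vocabulary List.

```python
# Sketch of lexical density and lexical sophistication.  The example
# predicate and vocabulary sets are hypothetical, for illustration only.

def lexical_density(tokens, is_lexical):
    """Ratio of lexical (content) words to all words."""
    return sum(map(is_lexical, tokens)) / max(len(tokens), 1)

def lexical_sophistication(tokens, basic_vocab):
    """Ratio of word types NOT found in the basic vocabulary list,
    relative to the total number of words."""
    types = set(tokens)
    return len([t for t in types if t not in basic_vocab]) / max(len(tokens), 1)

tokens = ["今日", "は", "公園", "へ", "行く", "た"]
basic = {"今日", "は", "へ", "行く", "た"}          # toy basic-vocabulary list
content_pos = {"今日", "公園", "行く"}               # toy content-word set
print(lexical_density(tokens, lambda t: t in content_pos))
print(lexical_sophistication(tokens, basic))
```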

Syntactic complexity was assessed based on several measures, including the mean length of clauses, verb phrases per T-unit, clauses per T-unit, dependent clauses per T-unit, complex nominals per clause, adverbial clauses per clause, coordinate phrases per clause, and mean dependency distance (MDD). The MDD reflects the distance between the governor and dependent positions in a sentence. A larger dependency distance indicates a higher cognitive load and greater complexity in syntactic processing (Liu, 2008 ; Liu et al. 2017 ). The MDD has been established as an efficient metric for measuring syntactic complexity (Jiang, Quyang, and Liu, 2019 ; Li and Yan, 2021 ). To calculate the MDD, the position numbers of the governor and dependent are subtracted, assuming that words in a sentence are assigned in a linear order, such as W1 … Wi … Wn. In any dependency relationship between words Wa and Wb, Wa is the governor and Wb is the dependent. The MDD of the entire sentence was obtained by taking the absolute value of governor – dependent:

MDD = \(\frac{1}{n}{\sum }_{i=1}^{n}|{\rm{D}}{{\rm{D}}}_{i}|\)

In this formula, \(n\) represents the number of dependency relationships in the sentence (the number of words minus one), and \({\rm{D}}{{\rm{D}}}_{i}\) is the dependency distance of the \(i\)-th dependency relationship. As an illustration, consider the sentence ‘Mary-ga John-ni keshigomu-o watashita’ [Mary-TOP John-DAT eraser-ACC give-PAST] (‘Mary gave John an eraser’); its MDD is 2. Table 3 provides the CSV file used as a prompt for GPT-4.
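A minimal sketch, assuming the dependency relationships are given as (governor position, dependent position) pairs of 1-indexed word positions. The annotation used below, with the final verb governing its three arguments, is one plausible analysis of the example sentence and reproduces its stated MDD of 2:

```python
def mdd(dependencies):
    """Mean dependency distance: the mean absolute difference between
    governor and dependent word positions over all dependency links."""
    if not dependencies:
        return 0.0
    return sum(abs(g - d) for g, d in dependencies) / len(dependencies)

# Hypothetical annotation: watashita (position 4) governs
# Mary (1), John (2), and keshigomu (3).
example = [(4, 1), (4, 2), (4, 3)]  # distances 3, 2, 1 -> mean 2.0
```

A larger MDD means governors and dependents sit farther apart on average, which is the cognitive-load interpretation the text draws on.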

Cohesion (semantic similarity) and content elaboration aim to capture the ideas presented in test takers’ essays. Cohesion was assessed using three measures: synonym overlap/paragraph (topic), synonym overlap/paragraph (keywords), and word2vec cosine similarity. Content elaboration and development were measured as the number of metadiscourse markers (types) divided by the number of words. To capture content closely, this study proposes a novel distance-based representation that encodes the cosine distance between the i-vectors of the learner’s essay and of the essay task (topic and keywords). The learner’s essay is decoded into a word sequence and aligned to the essay task’s topic and keywords for log-likelihood measurement. The cosine distance yields the content elaboration score for the learner’s essay. The cosine similarity between target and reference vectors is shown in (11), with ( \({L}_{1},\ldots ,{L}_{n}\) ) and ( \({N}_{1},\ldots ,{N}_{n}\) ) the vectors representing the learner’s essay and the task’s topic and keywords, respectively. The content elaboration distance between L and N was calculated as follows:

\(\cos \left(\theta \right)=\frac{{\rm{L}}\,\cdot\, {\rm{N}}}{\left|{\rm{L}}\right|{\rm{|N|}}}=\frac{\mathop{\sum }\nolimits_{i=1}^{n}{L}_{i}{N}_{i}}{\sqrt{\mathop{\sum }\nolimits_{i=1}^{n}{L}_{i}^{2}}\sqrt{\mathop{\sum }\nolimits_{i=1}^{n}{N}_{i}^{2}}}\)

A high similarity value indicates a small distance between the learner’s essay and the task content, which in turn suggests a high level of proficiency in content elaboration.
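Equation (11) can be sketched directly in pure Python. This is a minimal illustration; in practice the vectors would come from word2vec or i-vector extraction rather than being supplied by hand:

```python
import math

def cosine_similarity(l_vec, n_vec):
    """Cosine similarity between a learner-essay vector L and a
    task (topic/keyword) vector N, per equation (11)."""
    dot = sum(l * n for l, n in zip(l_vec, n_vec))
    norm_l = math.sqrt(sum(l * l for l in l_vec))
    norm_n = math.sqrt(sum(n * n for n in n_vec))
    if norm_l == 0 or norm_n == 0:
        # Assumption: define similarity as 0 for a zero vector.
        return 0.0
    return dot / (norm_l * norm_n)
```

Identical directions yield 1.0 and orthogonal vectors yield 0.0, matching the interpretation that higher values signal closer content elaboration.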

To evaluate the effectiveness of the proposed measures in distinguishing different proficiency levels among nonnative Japanese speakers’ writing, we conducted a multi-faceted Rasch measurement analysis (Linacre, 1994 ). This approach applies measurement models to thoroughly analyze various factors that can influence test outcomes, including test takers’ proficiency, item difficulty, and rater severity, among others. The underlying principles and functionality of multi-faceted Rasch measurement are illustrated in (12).

\(\log \left(\frac{{P}_{{nijk}}}{{P}_{{nij}(k-1)}}\right)={B}_{n}-{D}_{i}-{C}_{j}-{F}_{k}\)

(12) defines the logarithmic transformation of the probability ratio ( \({P}_{{nijk}}/{P}_{{nij}(k-1)}\) ) as a function of multiple parameters. Here, n represents the test taker, i denotes a writing proficiency measure, j corresponds to the human rater, and k represents the proficiency score. The parameter \({B}_{n}\) signifies the proficiency level of test taker n (where n ranges from 1 to N). \({D}_{i}\) represents the difficulty parameter of test item i (where i ranges from 1 to L), while \({C}_{j}\) represents the severity of rater j (where j ranges from 1 to J). Additionally, \({F}_{k}\) represents the step difficulty for a test taker to move from score k−1 to k. \({P}_{{nijk}}\) refers to the probability of rater j assigning score k to test taker n for test item i, and \({P}_{{nij}(k-1)}\) to the probability of the same rater assigning score k−1. Each facet within the test is treated as an independent parameter and estimated within the same frame of reference. To evaluate the consistency of scores obtained through both human and computer analysis, we utilized the Infit mean-square statistic. This statistic is a chi-square measure divided by its degrees of freedom, weighted with information, and is particularly sensitive to unexpected patterns in responses to items near a person’s proficiency level (Linacre, 2002 ). Fit statistics are assessed against predefined thresholds for acceptable fit. For the Infit MNSQ, which has a mean of 1.00, different thresholds have been suggested: some propose stricter bounds of 0.7 to 1.3 (Bond et al. 2021 ), while others suggest more lenient bounds of 0.5 to 1.5 (Eckes, 2009 ). In this study, we adopted the criterion of 0.70–1.30 for the Infit MNSQ.
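To make (12) concrete, the sketch below derives the score-category probabilities implied by the adjacent-category logits for hypothetical parameter values. It is an illustration of the model's form only, not the estimation procedure used by multi-faceted Rasch software:

```python
import math

def category_probabilities(b, d, c, thresholds):
    """Score-category probabilities under the multi-faceted Rasch model:
    log(P_k / P_{k-1}) = B - D - C - F_k for steps k = 1..K.
    `thresholds` holds the step difficulties F_1..F_K."""
    logits = [0.0]  # cumulative logit for the lowest category
    for f_k in thresholds:
        logits.append(logits[-1] + (b - d - c - f_k))
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

With all parameters at zero the categories are equiprobable; raising the ability B (or lowering difficulty D or severity C) shifts probability mass toward higher scores, which is exactly the relationship the logit equation encodes.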

Moving forward, we can now assess the effectiveness of the 16 proposed measures, based on five criteria, in accurately distinguishing various levels of writing proficiency among non-native Japanese speakers. To conduct this evaluation, we utilized the development dataset from the I-JAS corpus, as described in Section Dataset . Table 4 provides a measurement report that presents the performance details of the 16 measures under consideration. The measure separation was found to be 4.02, indicating a clear differentiation among the measures. The reliability index for the measure separation was 0.891, suggesting consistency in the measurement. Similarly, the person separation reliability index was 0.802, indicating the accuracy of the assessment in distinguishing between individuals. All 16 measures demonstrated Infit mean squares within a reasonable range, from 0.76 to 1.28. The synonym overlap/paragraph (topic) measure exhibited a relatively high Outfit mean square of 1.46, although its Infit mean square falls within the acceptable range. The standard error for the measures ranged from 0.13 to 0.28, indicating the precision of the estimates.

Table 5 further illustrates the weights assigned to the different linguistic measures for score prediction, with higher weights indicating stronger correlations between those measures and higher scores. The following measures carried higher weights than the others: moving average type-token ratio per essay (0.0391); mean dependency distance (0.0388); mean length of clause, calculated as the number of words divided by the number of clauses (0.0374); complex nominals per T-unit, calculated as the number of complex nominals divided by the number of T-units (0.0379); coordinate phrase rate, calculated as the number of coordinate phrases divided by the number of clauses (0.0325); and grammatical error rate, the number of errors per essay (0.0322).

Criteria (output indicator)

The criteria used to evaluate the writing ability in this study were based on CEFR, which follows a six-point scale ranging from A1 to C2. To assess the quality of Japanese writing, the scoring criteria from Table 6 were utilized. These criteria were derived from the IELTS writing standards and served as assessment guidelines and prompts for the written output.

A prompt is a question or detailed instruction provided to the model to obtain a proper response. After several pilot experiments, we decided to provide the measures (Section Measures of writing proficiency for nonnative Japanese ) as the input prompt and to use the criteria (Section Criteria (output indicator) ) as the output indicator. Regarding the prompt language, given that the LLM was tasked with rating Japanese essays, would prompting in Japanese work better? Footnote 5 We conducted experiments comparing the performance of GPT-4 using both English and Japanese prompts. Additionally, we utilized the Japanese local model OCLL with Japanese prompts. Multiple trials were conducted using the same sample. Regardless of the prompt language used, we consistently obtained the same grading results with GPT-4, which assigned a grade of B1 to the writing sample. This suggests that GPT-4 is reliable and capable of producing consistent ratings regardless of the prompt language. When we used Japanese prompts with the Japanese local model OCLL, however, we encountered inconsistent grading results: out of 10 attempts, only 6 yielded a consistent grade (B1), while the remaining 4 produced different outcomes, including A1 and B2. These findings indicate that the language of the prompt is not the determining factor for reliable AES; rather, the size of the training data and the model parameters play crucial roles in achieving consistent and reliable AES results.

The following is the utilized prompt, which details all measures and requires the LLM to score the essays using holistic and trait scores.

Please evaluate Japanese essays written by Japanese learners and assign a score to each essay on a six-point scale, ranging from A1, A2, B1, B2, C1 to C2. Additionally, please provide trait scores and display the calculation process for each trait score. The scoring should be based on the following criteria:

Moving average type-token ratio.

Number of lexical words (token) divided by the total number of words per essay.

Number of sophisticated word types divided by the total number of words per essay.

Mean length of clause.

Verb phrases per T-unit.

Clauses per T-unit.

Dependent clauses per T-unit.

Complex nominals per clause.

Adverbial clauses per clause.

Coordinate phrases per clause.

Mean dependency distance.

Synonym overlap paragraph (topic and keywords).

Word2vec cosine similarity.

Connectives per essay.

Conjunctions per essay.

Number of metadiscourse markers (types) divided by the total number of words.

Number of errors per essay.

Japanese essay text

出かける前に二人が地図を見ている間に、サンドイッチを入れたバスケットに犬が入ってしまいました。それに気づかずに二人は楽しそうに出かけて行きました。やがて突然犬がバスケットから飛び出し、二人は驚きました。バスケットの中を見ると、食べ物はすべて犬に食べられていて、二人は困ってしまいました。(ID_JJJ01_SW1)

(English translation: ‘Before they set out, while the two of them were looking at a map, the dog got into the basket holding the sandwiches. Without noticing, the two set off happily. Before long the dog suddenly jumped out of the basket, startling them. When they looked inside the basket, all the food had been eaten by the dog, and they were at a loss.’)

The score of the example above was B1. Figure 3 provides an example of holistic and trait scores provided by GPT-4 (with a prompt indicating all measures) via Bing Footnote 6 .

figure 3

Example of GPT-4 AES and feedback (with a prompt indicating all measures).

Statistical analysis

The aim of this study is to investigate the potential use of LLM for nonnative Japanese AES. It seeks to compare the scoring outcomes obtained from feature-based AES tools, which rely on conventional machine learning technology (i.e. Jess, JWriter), with those generated by AI-driven AES tools utilizing deep learning technology (BERT, GPT, OCLL). To assess the reliability of a computer-assisted annotation tool, the study initially established human-human agreement as the benchmark measure. Subsequently, the performance of the LLM-based method was evaluated by comparing it to human-human agreement.

To assess annotation agreement, the study employed standard measures such as precision, recall, and F-score (Brants 2000 ; Lu 2010 ), along with the quadratically weighted kappa (QWK) to evaluate the consistency and agreement in the annotation process. Assume A and B represent human annotators. When comparing the annotations of the two annotators, the following results are obtained. The evaluation of precision, recall, and F-score metrics was illustrated in equations (13) to (15).

\({\rm{Recall}}(A,B)=\frac{{\rm{Number}}\,{\rm{of}}\,{\rm{identical}}\,{\rm{nodes}}\,{\rm{in}}\,A\,{\rm{and}}\,B}{{\rm{Number}}\,{\rm{of}}\,{\rm{nodes}}\,{\rm{in}}\,A}\)

\({\rm{Precision}}(A,\,B)=\frac{{\rm{Number}}\,{\rm{of}}\,{\rm{identical}}\,{\rm{nodes}}\,{\rm{in}}\,A\,{\rm{and}}\,B}{{\rm{Number}}\,{\rm{of}}\,{\rm{nodes}}\,{\rm{in}}\,B}\)

The F-score is the harmonic mean of recall and precision:

\({\rm{F}}-{\rm{score}}=\frac{2* ({\rm{Precision}}* {\rm{Recall}})}{{\rm{Precision}}+{\rm{Recall}}}\)

The highest possible value of an F-score is 1.0, indicating perfect precision and recall; the lowest possible value is 0, which occurs if either precision or recall is zero.
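Equations (13)–(15) can be sketched as follows, treating each annotator's output as a set of nodes. This is a minimal illustration with hypothetical node sets:

```python
def agreement_scores(nodes_a, nodes_b):
    """Precision, recall, and F-score between two annotators' node sets,
    per equations (13)-(15): recall is relative to A, precision to B."""
    identical = len(nodes_a & nodes_b)
    recall = identical / len(nodes_a) if nodes_a else 0.0
    precision = identical / len(nodes_b) if nodes_b else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score
```

Because the F-score is a harmonic mean, it rewards balance: it only approaches 1.0 when precision and recall are both high.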

In accordance with Taghipour and Ng ( 2016 ), the calculation of QWK involves two steps:

Step 1: Construct a weight matrix W as follows:

\({W}_{{ij}}=\frac{{(i-j)}^{2}}{{(N-1)}^{2}}\)

Here, i represents the annotation made by the tool, while j represents the annotation made by a human rater; N denotes the total number of possible annotation categories. A matrix O is then computed, where \({O}_{i,j}\) represents the count of samples annotated i by the tool and j by the human annotator. E refers to the expected count matrix, which is normalized so that the sum of its elements matches the sum of the elements in O.

Step 2: With matrices O and E, the QWK is obtained as follows:

\({\rm{K}}=1-\frac{{\sum }_{i,j}{W}_{i,j}\,{O}_{i,j}}{{\sum }_{i,j}{W}_{i,j}\,{E}_{i,j}}\)
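The two steps above can be sketched as follows. This is a minimal illustration in which `tool` and `human` are hypothetical lists of integer category labels, and E is taken as the outer product of the two marginal distributions, normalized to the same total as O:

```python
def quadratic_weighted_kappa(tool, human, n_categories):
    """QWK following the two-step construction: quadratic weight
    matrix W, observed matrix O, and expected matrix E."""
    # Step 1: weight matrix and observed count matrix.
    w = [[(i - j) ** 2 / (n_categories - 1) ** 2
          for j in range(n_categories)] for i in range(n_categories)]
    o = [[0] * n_categories for _ in range(n_categories)]
    for i, j in zip(tool, human):
        o[i][j] += 1
    # Expected counts from the marginals, normalized so that the
    # elements of E sum to the same total as the elements of O.
    total = len(tool)
    tool_marginal = [sum(row) for row in o]
    human_marginal = [sum(o[i][j] for i in range(n_categories))
                      for j in range(n_categories)]
    e = [[tool_marginal[i] * human_marginal[j] / total
          for j in range(n_categories)] for i in range(n_categories)]
    # Step 2: kappa from the weighted observed/expected disagreement.
    num = sum(w[i][j] * o[i][j]
              for i in range(n_categories) for j in range(n_categories))
    den = sum(w[i][j] * e[i][j]
              for i in range(n_categories) for j in range(n_categories))
    return 1.0 - num / den
```

Perfect agreement leaves only diagonal entries in O, where the quadratic weights are zero, so the statistic reaches 1.0; disagreements farther from the diagonal are penalized quadratically.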

The value of the quadratic weighted kappa increases as the level of agreement improves. Further, to assess the accuracy of LLM scoring, the proportional reductive mean square error (PRMSE) was employed. The PRMSE approach takes into account the variability observed in human ratings to estimate the rater error, which is then subtracted from the variance of the human labels. This calculation provides an overall measure of agreement between the automated scores and true scores (Haberman et al. 2015 ; Loukina et al. 2020 ; Taghipour and Ng, 2016 ). The computation of PRMSE involves the following steps:

Step 1: Calculate the mean squared errors (MSEs) for the scoring outcomes of the computer-assisted tool (MSE tool) and the human scoring outcomes (MSE human).

Step 2: Determine the PRMSE by comparing the MSE of the computer-assisted tool (MSE tool) with the MSE from human raters (MSE human), using the following formula:

\({\rm{PRMSE}}=1-\frac{{{\rm{MSE}}}_{{\rm{tool}}}}{{{\rm{MSE}}}_{{\rm{human}}}}=1-\frac{{\sum }_{i=1}^{n}{({y}_{i}-{\hat{y}}_{i})}^{2}}{{\sum }_{i=1}^{n}{({y}_{i}-\bar{y})}^{2}}\)

In the numerator, \({\hat{y}}_{i}\) represents the score predicted by the LLM-driven AES system for sample i, so \({y}_{i}-{\hat{y}}_{i}\) is the deviation of that prediction from the human score for the same sample; the numerator thus quantifies the tool’s squared prediction error. In the denominator, \({y}_{i}-\bar{y}\) is the deviation of sample i’s human score from the mean of all human scores, so the denominator quantifies the variability of the human ratings. The PRMSE is then obtained by subtracting the ratio of the MSE of the tool to the MSE of the human raters from 1. PRMSE falls within the range of 0 to 1, with larger values indicating smaller errors in the LLM’s scoring relative to those of human raters. In other words, a higher PRMSE implies that the LLM’s scoring predicts the true scores more accurately (Loukina et al. 2020 ). The interpretation of kappa values follows Landis and Koch ( 1977 ): −1 indicates complete inconsistency, 0 indicates random agreement, 0.00–0.20 a slight level of agreement, 0.21–0.40 fair, 0.41–0.60 moderate, 0.61–0.80 substantial, and 0.81–1.00 almost perfect agreement. All statistical analyses were executed using Python scripts.
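A minimal sketch of the ratio form above (the simplified sum-of-squares formula, not the full rater-error-adjusted PRMSE of Haberman et al. 2015), with hypothetical score lists:

```python
def prmse(human_scores, predicted_scores):
    """PRMSE per the formula above: 1 minus the ratio of the tool's
    squared prediction error to the variance of the human scores."""
    n = len(human_scores)
    mean_h = sum(human_scores) / n
    mse_tool = sum((y - y_hat) ** 2
                   for y, y_hat in zip(human_scores, predicted_scores))
    mse_human = sum((y - mean_h) ** 2 for y in human_scores)
    return 1.0 - mse_tool / mse_human
```

Predicting every human score exactly gives 1.0, while predicting nothing beyond the mean human score gives 0.0, so PRMSE reads as the proportion of human-score variance the tool explains.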

Results and discussion

Annotation reliability of the LLM

This section focuses on assessing the reliability of the LLM’s annotation and scoring capabilities. To evaluate the reliability, several tests were conducted simultaneously, aiming to achieve the following objectives:

Assess the LLM’s ability to differentiate between test takers with varying levels of writing proficiency.

Determine the level of agreement between the annotations and scoring performed by the LLM and those done by human raters.

The evaluation of the results encompassed several metrics, including: precision, recall, F-Score, quadratically-weighted kappa, proportional reduction of mean squared error, Pearson correlation, and multi-faceted Rasch measurement.

Inter-annotator agreement (human–human annotator agreement)

We started with an agreement test between the two human annotators. Two trained annotators were recruited to annotate the writing task data for the measures. A total of 714 scripts was used as the test data. Each analysis lasted 300–360 min. Inter-annotator agreement was evaluated using the standard measures of precision, recall, F-score, and QWK. Table 7 presents the inter-annotator agreement for the various indicators. As shown, the inter-annotator agreement was fairly high, with F-scores ranging from 1.0 for sentence and word number to 0.666 for grammatical errors.

The findings from the QWK analysis provided further confirmation of the inter-annotator agreement. The QWK values covered a range from 0.950 ( p  = 0.000) for sentence and word number to 0.695 for synonym overlap number (keyword) and grammatical errors ( p  = 0.001).

Agreement of annotation outcomes between human and LLM

To evaluate the consistency between human annotators and LLM annotators (BERT, GPT, OCLL) across the indices, the same test was conducted. The results of the inter-annotator agreement (F-score) between LLM and human annotation are provided in Appendix B-D. The F-scores ranged from 0.706 for Grammatical error # for OCLL-human to a perfect 1.000 for GPT-human, for sentences, clauses, T-units, and words. These findings were further supported by the QWK analysis, which showed agreement levels ranging from 0.807 ( p  = 0.001) for metadiscourse markers for OCLL-human to 0.962 for words ( p  = 0.000) for GPT-human. The findings demonstrated that the LLM annotation achieved a significant level of accuracy in identifying measurement units and counts.

Reliability of LLM-driven AES’s scoring and discriminating proficiency levels

This section examines the reliability of the LLM-driven AES scoring through a comparison of the scoring outcomes produced by human raters and the LLM ( Reliability of LLM-driven AES scoring ). It also assesses the effectiveness of the LLM-based AES system in differentiating participants with varying proficiency levels ( Reliability of LLM-driven AES discriminating proficiency levels ).

Reliability of LLM-driven AES scoring

Table 8 summarizes the QWK coefficient analysis between the scores computed by the human raters and the GPT-4 for the individual essays from I-JAS Footnote 7 . As shown, the QWK of all measures ranged from k  = 0.819 for lexical density (number of lexical words (tokens)/number of words per essay) to k  = 0.644 for word2vec cosine similarity. Table 9 further presents the Pearson correlations between the 16 writing proficiency measures scored by human raters and GPT 4 for the individual essays. The correlations ranged from 0.672 for syntactic complexity to 0.734 for grammatical accuracy. The correlations between the writing proficiency scores assigned by human raters and the BERT-based AES system were found to range from 0.661 for syntactic complexity to 0.713 for grammatical accuracy. The correlations between the writing proficiency scores given by human raters and the OCLL-based AES system ranged from 0.654 for cohesion to 0.721 for grammatical accuracy. These findings indicated an alignment between the assessments made by human raters and both the BERT-based and OCLL-based AES systems in terms of various aspects of writing proficiency.

Reliability of LLM-driven AES discriminating proficiency levels

After validating the reliability of the LLM’s annotation and scoring, the subsequent objective was to evaluate its ability to distinguish between various proficiency levels. For this analysis, a dataset of 686 individual essays was utilized. Table 10 presents a sample of the results, summarizing the means, standard deviations, and the outcomes of the one-way ANOVAs based on the measures assessed by the GPT-4 model. A post hoc multiple comparison test, specifically the Bonferroni test, was conducted to identify any potential differences between pairs of levels.

As the results reveal, seven measures showed linear upward or downward progression across the three proficiency levels. These are marked in bold in Table 10 and comprise one measure of lexical richness, i.e. MATTR (lexical diversity); four measures of syntactic complexity, i.e. MDD (mean dependency distance), MLC (mean length of clause), CNT (complex nominals per T-unit), and CPC (coordinate phrase rate); one cohesion measure, i.e. word2vec cosine similarity; and GER (grammatical error rate). Regarding the ability of the sixteen measures to distinguish adjacent proficiency levels, the Bonferroni tests indicated statistically significant differences between the primary and intermediate levels for MLC and GER. One measure of lexical richness, namely LD, along with four measures of syntactic complexity (VPT, CT, DCT, ACC), two measures of cohesion (SOPT, SOPK), and one measure of content elaboration (IMM), exhibited statistically significant differences between proficiency levels; however, these differences did not show a linear progression between adjacent levels. No significant difference was observed in lexical sophistication between proficiency levels.

To summarize, our study aimed to evaluate the reliability and differentiation capabilities of the LLM-driven AES method. For the first objective, we assessed the LLM’s ability to differentiate between test takers with varying levels of writing proficiency using precision, recall, F-score, and quadratically weighted kappa. Regarding the second objective, we compared the scoring outcomes generated by human raters and the LLM to determine the level of agreement, employing quadratically weighted kappa and Pearson correlations across the 16 writing proficiency measures for the individual essays. The results confirmed the feasibility of using the LLM for annotation and scoring in AES for nonnative Japanese. Research Question 1 has thus been addressed.

Comparison of BERT-, GPT-, OCLL-based AES, and linguistic-feature-based computation methods

This section compares the effectiveness of five AES methods for nonnative Japanese writing: LLM-driven approaches utilizing BERT, GPT, and OCLL, and linguistic feature-based approaches using Jess and JWriter. The comparison was conducted by measuring the ratings obtained from each approach against human ratings. All ratings were derived from the dataset introduced in Section Dataset . To facilitate the comparison, the agreement between the automated methods and human ratings was assessed using QWK and PRMSE. The performance of each approach is summarized in Table 11 .

The QWK coefficient values indicate that LLMs (GPT, BERT, OCLL) and human rating outcomes demonstrated higher agreement compared to feature-based AES methods (Jess and JWriter) in assessing writing proficiency criteria, including lexical richness, syntactic complexity, content, and grammatical accuracy. Among the LLMs, the GPT-4 driven AES and human rating outcomes showed the highest agreement in all criteria, except for syntactic complexity. The PRMSE values suggest that the GPT-based method outperformed linguistic feature-based methods and other LLM-based approaches. Moreover, an interesting finding emerged during the study: the agreement coefficient between GPT-4 and human scoring was even higher than the agreement between different human raters themselves. This discovery highlights the advantage of GPT-based AES over human rating. Ratings involve a series of processes, including reading the learners’ writing, evaluating the content and language, and assigning scores. Within this chain of processes, various biases can be introduced, stemming from factors such as rater biases, test design, and rating scales. These biases can impact the consistency and objectivity of human ratings. GPT-based AES may benefit from its ability to apply consistent and objective evaluation criteria. By prompting the GPT model with detailed writing scoring rubrics and linguistic features, potential biases in human ratings can be mitigated. The model follows a predefined set of guidelines and does not possess the same subjective biases that human raters may exhibit. This standardization in the evaluation process contributes to the higher agreement observed between GPT-4 and human scoring. Section Prompt strategy of the study delves further into the role of prompts in the application of LLMs to AES. It explores how the choice and implementation of prompts can impact the performance and reliability of LLM-based AES methods. 
Furthermore, it is important to acknowledge the strengths of the local model, i.e. the Japanese local model OCLL, which excels in processing certain idiomatic expressions. Nevertheless, our analysis indicated that GPT-4 surpasses local models in AES. This superior performance can be attributed to the larger parameter size of GPT-4, estimated to be between 500 billion and 1 trillion, which exceeds the sizes of both BERT and the local model OCLL.

Prompt strategy

In the context of prompt strategy, Mizumoto and Eguchi ( 2023 ) conducted a study where they applied the GPT-3 model to automatically score English essays in the TOEFL test. They found that the accuracy of the GPT model alone was moderate to fair. However, when they incorporated linguistic measures such as cohesion, syntactic complexity, and lexical features alongside the GPT model, the accuracy significantly improved. This highlights the importance of prompt engineering and providing the model with specific instructions to enhance its performance. In this study, a similar approach was taken to optimize the performance of LLMs. GPT-4, which outperformed BERT and OCLL, was selected as the candidate model. Model 1 was used as the baseline, representing GPT-4 without any additional prompting. Model 2, on the other hand, involved GPT-4 prompted with 16 measures that included scoring criteria, efficient linguistic features for writing assessment, and detailed measurement units and calculation formulas. The remaining models (Models 3 to 18) utilized GPT-4 prompted with individual measures. The performance of these 18 different models was assessed using the output indicators described in Section Criteria (output indicator) . By comparing the performances of these models, the study aimed to understand the impact of prompt engineering on the accuracy and effectiveness of GPT-4 in AES tasks.

Based on the PRMSE scores presented in Fig. 4 , it was observed that Model 1, representing GPT-4 without any additional prompting, achieved a fair level of performance. However, Model 2, which utilized GPT-4 prompted with all measures, outperformed all other models in terms of PRMSE score, achieving a score of 0.681. These results indicate that the inclusion of specific measures and prompts significantly enhanced the performance of GPT-4 in AES. Among the measures, syntactic complexity was found to play a particularly significant role in improving the accuracy of GPT-4 in assessing writing quality. Following that, lexical diversity emerged as another important factor contributing to the model’s effectiveness. The study suggests that a well-prompted GPT-4 can serve as a valuable tool to support human assessors in evaluating writing quality. By utilizing GPT-4 as an automated scoring tool, the evaluation biases associated with human raters can be minimized. This has the potential to empower teachers by allowing them to focus on designing writing tasks and guiding writing strategies, while leveraging the capabilities of GPT-4 for efficient and reliable scoring.

figure 4

PRMSE scores of the 18 AES models.

This study aimed to investigate two main research questions: the feasibility of utilizing LLMs for AES and the impact of prompt engineering on the application of LLMs in AES.

To address the first objective, the study compared the effectiveness of five different models: GPT, BERT, the Japanese local LLM (OCLL), and two conventional machine learning-based AES tools (Jess and JWriter). The PRMSE values indicated that the GPT-4-based method outperformed other LLMs (BERT, OCLL) and linguistic feature-based computational methods (Jess and JWriter) across various writing proficiency criteria. Furthermore, the agreement coefficient between GPT-4 and human scoring surpassed the agreement among human raters themselves, highlighting the potential of using the GPT-4 tool to enhance AES by reducing biases and subjectivity, saving time, labor, and cost, and providing valuable feedback for self-study. Regarding the second goal, the role of prompt design was investigated by comparing 18 models, including a baseline model, a model prompted with all measures, and 16 models prompted with one measure at a time. GPT-4, which outperformed BERT and OCLL, was selected as the candidate model. The PRMSE scores of the models showed that GPT-4 prompted with all measures achieved the best performance, surpassing the baseline and other models.

In conclusion, this study has demonstrated the potential of LLMs in supporting human rating in assessments. By incorporating automation, we can save time and resources while reducing biases and subjectivity inherent in human rating processes. Automated language assessments offer the advantage of accessibility, providing equal opportunities and economic feasibility for individuals who lack access to traditional assessment centers or necessary resources. LLM-based language assessments provide valuable feedback and support to learners, aiding in the enhancement of their language proficiency and the achievement of their goals. This personalized feedback can cater to individual learner needs, facilitating a more tailored and effective language-learning experience.

There are three important areas that merit further exploration. First, prompt engineering requires attention to ensure optimal performance of LLM-based AES across different language types. This study revealed that GPT-4, when prompted with all measures, outperformed models prompted with fewer measures. Therefore, investigating and refining prompt strategies can enhance the effectiveness of LLMs in automated language assessments. Second, it is crucial to explore the application of LLMs in second-language assessment and learning for oral proficiency, as well as their potential in under-resourced languages. Recent advancements in self-supervised machine learning techniques have significantly improved automatic speech recognition (ASR) systems, opening up new possibilities for creating reliable ASR systems, particularly for under-resourced languages with limited data. However, challenges persist in the field of ASR. First, ASR assumes correct word pronunciation for automatic pronunciation evaluation, which proves challenging for learners in the early stages of language acquisition due to diverse accents influenced by their native languages. Accurately segmenting short words becomes problematic in such cases. Second, developing precise audio-text transcriptions for languages with non-native accented speech poses a formidable task. Last, assessing oral proficiency levels involves capturing various linguistic features, including fluency, pronunciation, accuracy, and complexity, which are not easily captured by current NLP technology.

Data availability

The dataset utilized was obtained from the International Corpus of Japanese as a Second Language (I-JAS), available at https://www2.ninjal.ac.jp/jll/lsaj/ihome2.html

Notes

J-CAT and TTBJ are two computerized adaptive tests used to assess Japanese language proficiency.

SPOT is a specific component of the TTBJ test.

J-CAT: https://www.j-cat2.org/html/ja/pages/interpret.html

SPOT: https://ttbj.cegloc.tsukuba.ac.jp/p1.html#SPOT

The study utilized a prompt-based GPT-4 model, developed by OpenAI. Although OpenAI has not officially disclosed the model's architecture, GPT-4 is reported to have 1.8 trillion parameters across 120 layers and to have been trained on a dataset of 13 trillion tokens in two stages: initial training on internet text datasets to predict the next token, followed by fine-tuning through reinforcement learning from human feedback.

https://www2.ninjal.ac.jp/jll/lsaj/ihome2-en.html

http://jhlee.sakura.ne.jp/JEV/ by Japanese Learning Dictionary Support Group 2015.

We express our sincere gratitude to the reviewer for bringing this matter to our attention.

On February 7, 2023, Microsoft began rolling out a major overhaul to Bing that included a new chatbot feature based on OpenAI’s GPT-4 (Bing.com).

Appendices E and F present the QWK coefficients between the scores computed by the human raters and those produced by the BERT and OCLL models.
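Since agreement throughout this article is reported as quadratic weighted kappa (QWK), a minimal pure-Python implementation may clarify the metric. The function below is a standard formulation of QWK, not code from the study; the rating-scale bounds are parameters.

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """QWK between two lists of integer ratings on [min_rating, max_rating]."""
    n = max_rating - min_rating + 1
    # Observed confusion matrix of rating pairs
    observed = [[0] * n for _ in range(n)]
    for a, b in zip(rater_a, rater_b):
        observed[a - min_rating][b - min_rating] += 1
    total = len(rater_a)
    # Marginal histograms for the chance-agreement (expected) matrix
    hist_a = Counter(a - min_rating for a in rater_a)
    hist_b = Counter(b - min_rating for b in rater_b)
    numerator = 0.0
    denominator = 0.0
    for i in range(n):
        for j in range(n):
            weight = ((i - j) ** 2) / ((n - 1) ** 2)  # quadratic penalty
            expected = hist_a[i] * hist_b[j] / total
            numerator += weight * observed[i][j]
            denominator += weight * expected
    return 1.0 - numerator / denominator

# Perfect agreement yields 1.0; chance-level agreement yields 0.0
print(quadratic_weighted_kappa([1, 2, 3, 4], [1, 2, 3, 4], 1, 4))  # → 1.0
```

The quadratic weights penalize disagreements by the square of their distance on the scale, which is why QWK is preferred over plain accuracy for ordinal essay scores.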



Funding

This research was funded by the National Foundation of Social Sciences (22BYY186) to Wenchao Li.

Author information

Authors and Affiliations

Department of Japanese Studies, Zhejiang University, Hangzhou, China

Department of Linguistics and Applied Linguistics, Zhejiang University, Hangzhou, China


Contributions

Wenchao Li is in charge of conceptualization, validation, formal analysis, investigation, data curation, visualization and writing the draft. Haitao Liu is in charge of supervision.

Corresponding author

Correspondence to Wenchao Li .

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

Ethical approval was not required as the study did not involve human participants.

Informed consent

This article does not contain any studies with human participants performed by any of the authors.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplemental material file #1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Li, W., Liu, H. Applying large language models for automated essay scoring for non-native Japanese. Humanit Soc Sci Commun 11 , 723 (2024). https://doi.org/10.1057/s41599-024-03209-9


Received: 02 February 2024

Accepted: 16 May 2024

Published: 03 June 2024

DOI: https://doi.org/10.1057/s41599-024-03209-9


