📊 Progress:

Total questions: 0

Correct: 0

Incorrect: 0

I. Listening: Lecture

Step 1. Listen to the lecture and take notes.

💡 Recommendation:

  • The more facts you take down, the better.

  • Try to imagine everything you hear in detail. This will help you remember more.

  • Try to understand why the lecturer is giving this lecture and what their purpose is.

  • Try to understand why the lecturer uses each example or illustration.

  • You will not see the questions while listening. You will see one question at a time after the lecture finishes.

  • You have 1 min 20 sec for each question. Work fast!

Click here to show/hide the audio script

Step 2. Do the test on the lecture

Questions:

1. What does the speaker mainly aim to achieve in this talk?




2. At one point, the professor likens one kind of network to a factory process. What is significant about that analogy?




3. Why does the lecturer refer to the way people scan written material when describing another type of network?




4. According to the lecture, why are loop‑based networks useful for tasks like translating sentences?




5. What can be concluded about biologically inspired networks from the talk?




6. Why does the instructor briefly mention where each architecture is applied?




7. During the lecture, the professor mentions classifying flowers. Why does he give this example?




8. What issue related to loop‑based networks does the professor mention?




9. What can be inferred about the professor when he says this: “Finally, I want to briefly mention spiking neural networks.”




10. Why does the professor begin with the most straightforward kind of network?




 

Total Questions: 0

Incorrect Answers: 0

II. Reading 

Step 1. Read the text below

Reading + Test Time — 18 minutes

18:00
First read the questions and options on a) Author's Purpose and b) Negative Factual Information (what is NOT mentioned). Then scan the text quickly for the author's purpose. The logic and the author's points are expressed in linking phrases at the beginning, in the middle and at the end of each paragraph. For the Negative Factual Information questions, note which options ARE mentioned in the text and which are NOT.

Read the questions marked in red under the text first!

Paragraph 1
Over the past decade, neural networks have revolutionized many areas of computing, from natural language processing to image recognition and beyond. Early successes were driven largely by a simple recipe: build bigger models, feed them more data, and let them learn patterns automatically. Researchers observed “scaling laws,” showing that performance improved predictably as models grew in size and were trained on larger data sets. This approach led to well‑known systems like GPT‑4, LLaMA and other so‑called large language models. However, scaling up is not free. It requires enormous amounts of computational power, electricity and data. As a result, the field has begun to ask whether blindly making networks larger is sustainable or even effective. Understanding the limitations of scaling and exploring new types of neural architectures has become one of the central challenges for researchers and engineers today.

Paragraph 2
One of the reasons scaling can stall is rooted in basic mathematics. Modern language models rely on the transformer architecture, which uses multiple layers of matrix operations to process tokens of text. Theoreticians have shown that as the context size grows, the noise in these hidden representations becomes less predictable. In other words, adding more parameters and data eventually yields diminishing returns because random fluctuations start to overwhelm meaningful patterns. Additionally, there is a trade‑off between the bias of a model and its variance: scaling reduces some types of error but introduces others. Scholars have formulated these trade‑offs in terms of a signal‑to‑noise ratio. When that ratio crosses a critical threshold, new capabilities seem to “emerge,” but beyond that point further growth does not help. In the language of engineers, current models may be nearing the top of the S‑curve for performance. As a result, larger networks do not necessarily produce proportionally better results, despite the massive costs required to train them. Instead, researchers are looking for ways to rethink how networks are built and used.

Paragraph 3
Economic factors also push developers away from simple scaling. Training very large models can cost millions of dollars and takes weeks or months on specialized hardware, but deploying them at scale can be even more expensive. Interestingly, the cost of running a trained model—the inferencing cost—has fallen dramatically in recent years. Thanks to software optimization, cheaper graphics processors and new tuning techniques, the cost per million tokens produced by some models has dropped from tens of dollars to just a few cents. Lower costs open the door to a new paradigm: rather than focusing solely on training bigger networks, researchers can invest computational resources during inference, allowing the model to “think” longer by evaluating multiple possible answers or performing a chain‑of‑thought. At the same time, there is a growing recognition that different tasks require different tools. Specialized models designed for narrow domains, often called small foundational models, and tiny models that can run on smartphones or other edge devices, are becoming more common. These models are easier to train, cheaper to deploy and may avoid some of the pitfalls of giant systems.

Paragraph 4
The realization that bigger isn’t always better has spurred innovation in neural network design. One important line of work focuses on diffusion models. Rather than predicting words one by one, diffusion models start from random noise and gradually refine their output over many steps. Originally popular for image generation, diffusion models have been adapted for text and video. Apple researchers recently introduced Matryoshka Diffusion Models, which nest smaller diffusers inside larger ones. By jointly training multiple resolutions of a scene, these models can generate high‑resolution images and videos efficiently. Other groups are exploring hybrid architectures that combine the strengths of language models and diffusion. For instance, the LanDiff model first compresses a scene into a sequence of symbolic “tokens” using a language model, then uses a diffusion process to add perceptual detail, producing coherent and visually rich videos. These examples illustrate how new architectures attempt to preserve quality while reducing the need for ever‑larger parameter counts.

Paragraph 5
Another promising direction mixes neural learning with explicit reasoning. Traditional “connectionist” AI excels at pattern recognition, but struggles with logical rules and causal relationships. To address this, researchers are developing neuro‑symbolic and causal AI systems that integrate deterministic logic with probabilistic neural components. Such systems can impose constraints on the model’s behavior, explaining its decisions in human‑understandable terms, and reducing undesirable outputs like hallucinations. There is also interest in building agentic networks that can plan actions rather than just generating text. Large Action Models (LAMs), for example, attempt to translate natural language instructions into executable tasks, such as booking a flight or managing a calendar. Similarly, Large Concept Models (LCMs) operate at the level of whole sentences or ideas, rather than individual words, allowing them to summarize or expand text more efficiently across multiple languages. These innovations point to a future where networks are not only larger or more powerful, but also more modular, interpretable and responsive to human intentions.

Paragraph 6
Finally, scientists are turning to biology for inspiration in solving the scaling problem. Neuromorphic computing seeks to replicate how human brains work by building hardware that processes information through spikes—brief electrical pulses—rather than continuous numbers. Spiking neural networks communicate sparsely, consuming far less energy than traditional artificial networks. Recent research has shown that these networks can now be trained using gradients, the same basic method used for other deep learning models. Coupled with advances in digital neuromorphic chips and in‑memory computing, spiking networks could power low‑energy devices like smart watches or autonomous sensors. Alongside this, studies of cortical organization suggest that the brain’s sparsely connected, recurrent architecture may be more efficient for certain tasks than the fully connected layers used in conventional AI. Researchers have found that networks mimicking these sparse structures can learn more rapidly and represent information more robustly when data or computation is limited. The exploration of spiking systems, sparse connectivity and brain‑like architectures indicates that solving the scaling problem may require not only algorithmic innovations but also a reimagining of the hardware on which neural networks run. Taken together, these developments suggest a future in which artificial intelligence is not just bigger, but smarter, more efficient and more diverse in its underlying designs.

 

Questions:

1. Why does the author mention that the cost per million tokens for running a language model has fallen dramatically to a few cents? [Question Type: Author’s Purpose]




Click here to show/hide the explanation

2. According to paragraph 2, which of the following is NOT identified as a factor limiting the effectiveness of simply scaling up neural networks? [Question Type: Negative Factual Information]




Click here to show/hide the explanation

3. According to paragraph 3, how does the recent drop in the cost of running trained models influence research priorities? [Question Type: Detail]




Click here to show/hide the explanation

4. According to paragraph 4, what advantage do nested diffusion models and hybrid architectures offer? [Question Type: Detail]




Click here to show/hide the explanation

5. Which of the sentences below best expresses the essential information in the highlighted sentence in paragraph 6? Incorrect choices change the meaning in important ways or leave out essential information. [Question Type: Paraphrase]




Click here to show/hide the explanation

6. The word “modular” in paragraph 5 is closest in meaning to: [Question Type: Vocabulary]




Click here to show/hide the explanation

7. According to paragraph 5, which of the following is NOT one of the goals of integrating deterministic logic with neural learning? [Question Type: Negative Factual Information]




Click here to show/hide the explanation

8. Which paragraph discusses the trade‑off between bias and variance and introduces the concept of a signal‑to‑noise ratio? [Question Type: Detail]




Click here to show/hide the explanation

9. In the paragraph below, there is a missing sentence. Look at the paragraph and indicate (A, B, C, or D) where the following sentence could be added to the passage.
This means that simply increasing the number of layers will not always result in better performance. [Question Type: Sentence Insertion]

One of the reasons scaling can stall is rooted in basic mathematics. (A) Modern language models rely on the transformer architecture, which uses multiple layers of matrix operations to process tokens of text. (B) Theoreticians have shown that as the context size grows, the noise in these hidden representations becomes less predictable. (C) In other words, adding more parameters and data eventually yields diminishing returns because random fluctuations start to overwhelm meaningful patterns. (D) Additionally, there is a trade‑off between the bias of a model and its variance, and scholars have formulated these trade‑offs in terms of a signal‑to‑noise ratio.




Click here to show/hide the explanation

10. Directions: An introductory sentence for a brief summary of the passage is provided below. Complete the summary by dragging the letters of the 3 answer choices that express the most important ideas into the box. [Question Type: Summary]

Recent research on neural networks suggests that scaling up models is not always effective.

A. Scaling laws show that noise and diminishing returns limit the benefits of simply adding more parameters; new mathematical frameworks like signal-to-noise ratio illustrate these limitations.
B. Lower inferencing costs and the development of specialized models have shifted focus toward small and domain-specific architectures that may be deployed on edge devices.
C. Diffusion models, hybrid architectures, and concept-level models offer efficient alternatives to giant transformer-based networks, enabling high-resolution generation and better semantic control.
D. All new architectures replace the need for traditional neural networks, making future research on transformers unnecessary.
E. Spiking neural networks, sparse connectivity and neuromorphic hardware suggest that solving scaling challenges involves changes in both algorithms and hardware design.
F. —
Summary
 

Click here to show/hide the explanation

---

 

Total Questions: 0

Incorrect Answers: 0

III. Writing

1. Integrated writing.

Step 1. Read the text below.

Reading Time — 3 minutes

 
Read & take down 3 main ideas: 3:00

Reading Passage:

In recent years, the rapid expansion of neural‑network‑based artificial intelligence has drawn criticism from ethicists and policymakers. A central concern is that many modern AI systems operate as “black boxes.” Their internal logic is so complex that even their creators cannot fully explain how they reach a decision. This opacity has had real‑world consequences. Investigations have shown that proprietary risk‑assessment tools like the COMPAS algorithm used in parts of the U.S. legal system produced racially biased scores and yet defendants were given no insight into how those scores were calculated. Similarly, AI‑driven hiring systems trained on historical data have amplified gender and racial stereotypes because nobody could audit the hidden criteria the model learned. Critics argue that without meaningful transparency and oversight, such systems threaten civil rights and due process.

A second critique is that deploying opaque AI in critical infrastructures undermines accountability. When a self‑driving car’s vision system misclassifies a pedestrian, it may be impossible for investigators to reconstruct the chain of reasoning that led to the crash. In other high‑stakes contexts—such as health‑care triage, loan approvals and predictive policing—secret algorithms make decisions that profoundly affect people’s lives. Opponents contend that entrusting these choices to inscrutable machines erodes public trust and prevents injured parties from challenging a decision.

Finally, some experts argue that the deep‑learning paradigm is fundamentally misaligned with democratic values. Because modern neural networks require enormous amounts of data and computing power, only a handful of corporations can afford to develop them. These companies often refuse to disclose training data or model weights, citing trade secrecy. Thus, not only are the models themselves black boxes, but the context of their creation is opaque. Critics fear that, without strict regulation, the AI revolution will be driven by private interests at the expense of fairness and human rights. They urge regulators to limit the use of black‑box systems in sensitive domains until they can be rendered transparent and accountable.

Step 2. Listen to part of a lecture below and take notes.

If the lecture is too hard to understand, click here to show/hide its script

Important: Write out the three main ideas and the elaborations/illustrations/details that the lecturer provides for each. Connect the points made in the lecture to the points made in the reading! When you hear the question, click to show the passage and question, then begin your response.

Click here to show/hide the question

Step 3. Write your answer.

Writing time - 16 min.

💡 Recommendation:

  • Aim to finish at least 1–2 minutes before the timer runs out to check for grammar or missing content.

Tip: Write at least 300 words

 
Write: 16:00

Click here to show/hide the template


 

2. Independent writing

Reading time - 2 minutes, writing time - 8 minutes

Step 1. Read the text

Professor Miguel’s Post:

Dear students,

Over the past year we have all been hearing about the growing influence of artificial intelligence in every corner of society, and higher education is no exception. Tools ranging from grammar‑checkers to large language models can draft text, summarize articles, generate code or even suggest research topics. This raises an important question for us as a university community: Should we allow students to use AI tools in their academic work? If the answer is yes, where should we draw the line between using AI thoughtfully to aid understanding and simply letting the software do the thinking for us?

On one hand, advocates argue that AI can be a powerful assistant, helping you sift through large amounts of information, spot connections you might otherwise miss and refine your writing. In this sense, AI could be comparable to using a calculator in mathematics or a search engine for research—tools that enhance, rather than replace, human judgement. On the other hand, there are legitimate concerns that reliance on AI could erode critical‑thinking skills, introduce subtle inaccuracies or biases from the training data and tempt some to submit AI‑generated text as their own work.

I would like to hear your thoughts and experiences. Have you used AI tools in your coursework or research? If so, how do you ensure that these tools support your learning instead of simply substituting for it? What guidelines do you think universities should adopt to strike a balance between embracing innovation and maintaining academic integrity?

Looking forward to a lively and thoughtful discussion.


 Student 1: Luise

I think universities should allow the use of AI tools, but only as an aid rather than a shortcut. When I use a language model to generate ideas or summaries, I treat its output as a starting point. I still cross‑check the information, rewrite it in my own words, and add my own analysis. To me, the line between helpful and harmful is whether the AI is prompting you to think deeper or simply replacing your thinking. If we learn to use these tools responsibly, they could actually improve our research skills instead of undermining them.

 Student 2: Lucas

I’m more skeptical about AI in academic work, because the temptation to let it do the heavy lifting is strong. There’s a difference between using a spell‑checker and pasting a full paragraph of AI‑generated text into an essay. Copying without understanding makes it harder to develop original ideas, and the output can be wrong or biased. I would feel more comfortable if our university set clear guidelines on when and how AI is acceptable, such as for brainstorming or checking grammar. Until then, I prefer to rely on my own reading and writing process.

Writing Question:

Write a response (about 120 words) stating your opinion on the issue. Be sure to:

  • State your own view clearly. You will earn more points if your opinion differs from those of the students.
  • Refer to the opinions of both Luise and Lucas.
  • Use specific reasons or examples.

Step 2. Write a response 

Tip: Write at least 120 words

Important: Address both students' views!

 
Write: 8:00