DDQ TUE 2021-11-11
35. JavaScript¶
35.1. Agenda¶
General Announcements
Discussion & Activity
Category | Item | Day | Date | Due
---|---|---|---|---
Note | Project Work Day | MON | 11-22 |
Note | Project Work Day | TUE | 11-23 |
Note | Holiday: Thanksgiving - No Class | THU | 11-25 |
Exams | Exam 2 [1] | THU | 12-02 | 11:55 PM
Note | Last Day of Class | MON | 12-06 |
Note | Friday Class Schedule in Effect - No Class | TUE | 12-07 |
 | Term Project [1] | MON | 12-13 | 11:55 PM
[1] (1,2) As explained in the Exams section of the syllabus, the final milestone of your term project (i.e., Milestone 4: Prototyping & Testing) serves as your final examination in this course. Exam 2 is, therefore, a regular exam; it is not the final exam. The final term project milestone is considered a “take-home final exam.”
Read the abstracts for upcoming papers that will be presented. You can, of course, read the entirety of a paper, if interested, but you need to read the abstract before the paper is presented so that you can provide good feedback to the presenter. The full paper presentation schedule is available here, and upcoming paper presentations are listed near the bottom of this page.
35.2. Activity¶
35.2.1. Introduction to JavaScript¶
Alongside HTML and CSS, the JavaScript programming language is considered a core technology of the World Wide Web. According to usage statistics compiled by W3Techs, JavaScript is used as the client-side programming language by 97.6% of all websites (last retrieved November 10, 2021).
- JS¶
- JavaScript¶
A weakly-typed, high-level programming language that conforms to the ECMAScript® Language Specification. It also supports:
- dynamic typing
A value has a type, but a variable does not. What you can do with a variable depends on its value.
let x = 4;
console.log(typeof x); // 'number'

x = "hello world";
console.log(typeof x); // 'string'
- prototype-based object-orientation
No explicit classes! Each object in a JavaScript program is a modified clone of an existing object, called its prototype. There are a limited number of built-in prototype objects, and most, if not all, built-in prototype objects have Object set as their prototype object. Any object can be the prototype object for a new object. One major benefit of prototype-based object-orientation is the ability to polyfill, i.e., to modify object prototypes to provide modern functionality on older user agents that do not natively support it.

Note

The class syntax in ECMAScript is syntactic sugar on top of the existing prototype system.

class Foo {}
console.log(typeof Foo); // "function"
console.log(Foo.prototype.__proto__ === Function.prototype.__proto__); // true
console.log(Foo.prototype.__proto__ === Object.prototype); // true
Here is an example where we add a makeString method to the built-in Array class by modifying its prototype:

let list = [1, 2, 3];
console.log(list?.makeString); // undefined

Array.prototype.makeString = function (start, sep, end) {
  return start + this.join(sep) + end;
}; // makeString

console.log(list.makeString("(", "; ", ")")); // (1; 2; 3)
Adding functionality to built-in objects is usually discouraged, except in polyfill scenarios. Replacing existing implementations can be useful in situations where you need to optimize something for a specific platform (at the loss of portability).
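To make the polyfill pattern mentioned above concrete, here is a minimal sketch that installs a hypothetical last method on Array only when the runtime does not already provide one (the method name and its behavior are made up for illustration; they are not part of any standard):

```javascript
// Polyfill sketch: only install the method if it is missing,
// so a native implementation is never overwritten.
// `last` is a hypothetical method used here for illustration.
if (!Array.prototype.last) {
  Array.prototype.last = function () {
    // Return the final element, or undefined for an empty array.
    return this.length > 0 ? this[this.length - 1] : undefined;
  };
} // if

console.log([1, 2, 3].last()); // 3
console.log([].last());        // undefined
```

The guard is what distinguishes a polyfill from the makeString example above: the prototype is only modified when the functionality is actually absent.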
- first-class functions
Functions are objects.
function log(label, ...args) {
  console.log(label, { content: args });
} // log
const log = function (label, ...args) {
  console.log(label, { content: args });
}; // log
const getLogger = function (label) {
  function log(...args) {
    console.log(log.label, { content: args });
  } // log
  log.label = label;
  return log;
}; // getLogger

const log_info = getLogger("INFO");
log_info("hello, world");
const logger = function (label) {
  return (...args) => console.log(label, { content: args });
}; // logger

const log_info = logger("INFO");
log_info("hello, world");
const log = (label) => (...args) => console.log(label, { content: args });
log("INFO")("hello, world");
The last two examples use an arrow function expression, a compact alternative to the traditional syntax that should look familiar to readers who have worked with lambda expressions in other programming languages.
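Beyond brevity, arrow functions differ from traditional function expressions in how they treat this: an arrow function captures this from its enclosing scope instead of receiving it from the call site. A small sketch (the counter object is made up for illustration):

```javascript
const counter = {
  count: 0,
  incrementAll(values) {
    // The arrow function closes over the `this` of incrementAll,
    // so `this.count` refers to counter.count.
    values.forEach(() => { this.count += 1; });
  }, // incrementAll
};

counter.incrementAll([10, 20, 30]);
console.log(counter.count); // 3
```

Had the forEach callback been written with the function keyword instead, this would not have referred to counter (it would be undefined in strict mode), which is a common source of bugs when mixing the two syntaxes.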
- Vanilla JS¶
Vanilla JS refers to using plain JavaScript without any additional libraries. To see how Vanilla JS compares to “JS + some popular library,” refer to http://vanilla-js.com/.
35.2.1.1. Vanilla APIs¶
- Fetch API¶
A JavaScript API that provides an interface for fetching resources (including across the network).
const endpoint = "https://dog.ceo/api/breeds/image/random";

fetch(endpoint)
  .then(response => response.json())
  .then(json => json.message)
  .then(url => console.log(url));
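The same chain can also be written with async/await. The sketch below wraps the logic in a function (the name randomDogImage is made up for illustration) and takes the fetch function as a parameter so that a stub can stand in for the network; real code would pass the global fetch instead:

```javascript
// async/await version of the promise chain above. Assumes the same
// response shape, where the JSON body carries the URL in `message`.
async function randomDogImage(fetchFn, endpoint) {
  const response = await fetchFn(endpoint);
  const json = await response.json();
  return json.message;
} // randomDogImage

// A stubbed fetch lets the sketch run without a network connection;
// the returned object mimics only the part of Response used above.
const stubFetch = async () =>
  ({ json: async () => ({ message: "https://example.com/dog.jpg" }) });

randomDogImage(stubFetch, "https://dog.ceo/api/breeds/image/random")
  .then(url => console.log(url)); // https://example.com/dog.jpg
```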
35.2.1.2. Combining HTML, CSS, and JavaScript¶
Below is an example application that combines HTML, CSS, and
JavaScript to populate a dropdown list (i.e., an HTML select
element)
with dog breeds, then provide a button that a user can click to
fetch images of dogs based on the breed that is currently
selected in the dropdown. Both the list of breeds and the images
are retrieved using the Dog API.
Warning
The example application below is hastily written. It does not explicitly handle fetch-related or HTTP-related errors, which are important considerations when your application relies on an external service (e.g., the Dog API). What if the service has some downtime (e.g., for maintenance, due to a crash, etc.) or the user’s internet connection is disrupted after the application loads?
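Here is a minimal sketch of one way to surface such failures: throw on a non-OK status so that HTTP errors become promise rejections, then attach a single .catch handler at the end of the chain. The plain object standing in for a real Response, and the messages logged, are assumptions for illustration:

```javascript
// Throwing on a non-OK status turns HTTP errors into promise
// rejections, which one .catch at the end of the chain can handle.
function okOrThrow(response) {
  if (response.ok) {
    return response;
  } // if
  throw new Error(`HTTP ${response.status}`);
} // okOrThrow

// A plain object stands in for a real Response here so the sketch
// can run without a network; real code would pass fetch()'s result.
Promise.resolve({ ok: false, status: 503 })
  .then(okOrThrow)
  .then(response => console.log("request succeeded"))
  .catch(error => console.log("request failed:", error.message));
// prints "request failed: HTTP 503"
```

A production version would also want a .catch for network-level failures (fetch rejects before any Response exists when the connection drops) and some user-visible fallback message in the page.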
The hastily written HTML, CSS, and JavaScript code for the Dog API example application can be found in Listing 35.6, Listing 35.7, and Listing 35.8, respectively. A version of the application is also available as a JSFiddle here.
<section id="searchArea">
<header>
<form>
<label for="breeds">Breeds:</label>
<select name="breed" id="breedList" disabled>
<!-- populate with fetch -->
</select>
<input id="fetchImagesButton" type="button" value="Fetch Images" disabled>
</form>
</header>
<div id="imageList">
<!-- populate with fetch -->
Initializing...
<noscript>Please enable JavaScript to use this application.</noscript>
</div>
<footer>
<small>
Powered by HTML, CSS, JavaScript, and the
<a href="https://dog.ceo/dog-api/">Dog API</a>.
</small>
</footer>
</section>
#searchArea {
font-family: sans-serif;
display: flex;
flex-flow: column nowrap;
align-items: center;
gap: 1em;
}
#searchArea > header > form {
display: flex;
justify-content: center;
align-items: center;
gap: 0.25rem;
}
#searchArea > #imageList > img {
width: 64px;
height: 64px;
}
class RestApi {
constructor(endpoint) {
this.endpoint = endpoint;
} // constructor
okOrThrow(response) {
if (response.ok) {
return response;
} else {
throw new Error(`HTTP ${response.status}`);
} // if
} // okOrThrow
fetch(...args) {
let method = args.join("/");
return fetch(this.endpoint + method)
.then(this.okOrThrow)
} // fetch
} // RestApi
const dogApi = new RestApi("https://dog.ceo/api/");
const breedList = document.querySelector("#breedList");
const imageList = document.querySelector("#imageList");
const fetchImagesButton = document.querySelector("#fetchImagesButton");
function option(value, text) {
const element = document.createElement("option");
element.value = value;
element.textContent = text;
return element;
} // option
function image(src, alt) {
const element = document.createElement("img");
element.src = src;
element.alt = alt;
return element;
} // image
function initBreeds(breeds) {
if (breeds.length > 0) {
breeds.forEach(breed => breedList.appendChild(option(breed, breed)));
breedList.disabled = false;
fetchImagesButton.disabled = false;
imageList.innerHTML = "Select a breed, then click the <em>Fetch Images</em> button.";
} // if
} // initBreeds
function updateImages(urls) {
if (urls.length > 0) {
imageList.innerHTML = "";
urls.forEach(url => imageList.appendChild(image(url, "A dog")));
} else {
imageList.innerHTML = "No results...";
} // if
fetchImagesButton.disabled = false;
} // updateImages
function fetchImages() {
let name = breedList.options[breedList.selectedIndex].textContent;
imageList.innerHTML = "Loading...";
fetchImagesButton.disabled = true;
dogApi.fetch("breed", name, "images")
.then(response => response.json())
.then(data => data.message)
.then(urls => updateImages(urls));
} // fetchImages
dogApi.fetch("breeds", "list", "all")
.then(response => response.json())
.then(data => Object.keys(data.message))
.then(breeds => initBreeds(breeds));
fetchImagesButton.addEventListener("click", fetchImages, false);
35.2.2. Breakout Groups¶
Important
RANDOMIZE: Please move around to different tables and form a random group for this activity. Each group will be assigned a number by the instructor.
Quickly introduce yourselves to each other, if you don’t already know each other.
Pick a group representative. This person will be responsible for posting your breakout group’s response on Piazza before breakout group work ends for this activity.
Help your group representative respond to the prompts below in a followup discussion to Piazza @98.
List the names of your breakout group members.
What JavaScript-related topic is assigned to your group based on the group number that you were assigned by your instructor and the table below?
Provide a useful description of your topic that explains what it is, what it does, and how it compares to alternatives. Feel free to include code snippets, screenshots, and links to good and interesting resources. You can also share JSFiddle links, if applicable, for examples.
Has anyone in your group used the library, framework, service, or platform that is the topic of your group’s discussion? If so, what was their experience? Otherwise, do you see yourself using it in the future? Why or why not?
Look at and reply to the posts that other groups made.
35.2.4. After Class¶
Before 11:55 PM on FRI 11-12, individually comment on someone else’s followup discussion in Piazza @98.
Continue reading the Design and Practicum modules, and make sure you’re aware of current assignments and their due dates.
Read the abstracts for upcoming papers that will be presented. You can, of course, read the entirety of a paper, if interested, but you need to read the abstract before the paper is presented so that you can provide good feedback to the presenter. Here is the presentation schedule for Fall 2021.
Table 35.2 Fall 2021 Paper Presentation Schedule¶

Date | Presenter | Paper
---|---|---
MON 11-15 | Yadav, Himani | Stefanie M. Faas, Johannes Kraus, Alexander Schoenhals, and Martin Baumann. 2021. Calibrating Pedestrians’ Trust in Automated Vehicles: Does an Intent Display in an External HMI Support Trust Calibration and Safe Crossing Behavior? Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, Article 157, 1–17. DOI: 10.1145/3411764.3445738
MON 11-29 | Churaman, Tanya | Wonjung Kim, Seungchul Lee, Seonghoon Kim, Sungbin Jo, Chungkuk Yoo, Inseok Hwang, Seungwoo Kang, and Junehwa Song. 2020. Dyadic Mirror: Everyday Second-person Live-view for Empathetic Reflection upon Parent-child Interaction. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 4, 3, Article 86 (September 2020), 29 pages. DOI: 10.1145/3411815
MON 11-29 | Akin, Nicky | Karan Ahuja, Deval Shah, Sujeath Pareddy, Franceska Xhakaj, Amy Ogan, Yuvraj Agarwal, and Chris Harrison. 2021. Classroom Digital Twins with Instrumentation-Free Gaze Tracking. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, Article 484, 1–9. DOI: 10.1145/3411764.3445711
TUE 11-30 | Harper, Daniel | Rebecca Currano, So Yeon Park, Dylan James Moore, Kent Lyons, and David Sirkin. 2021. Little Road Driving HUD: Heads-Up Display Complexity Influences Drivers’ Perceptions of Automated Vehicles. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, Article 511, 1–15. DOI: 10.1145/3411764.3445575
TUE 11-30 | Suarez, Mathew | Stephen Uzor and Per Ola Kristensson. 2021. An Exploration of Freehand Crossing Selection in Head-Mounted Augmented Reality. ACM Transactions on Computer-Human Interaction (TOCHI). 28, 5, Article 33 (October 2021), 27 pages. DOI: 10.1145/3462546
MON 12-06 | Hamill, Daniel | Ziang Xiao, Michelle X. Zhou, Q. Vera Liao, Gloria Mark, Changyan Chi, Wenxi Chen, and Huahai Yang. 2020. Tell Me About Yourself: Using an AI-Powered Chatbot to Conduct Conversational Surveys with Open-ended Questions. ACM Transactions on Computer-Human Interaction (TOCHI). 27, 3, Article 15 (June 2020), 37 pages. DOI: 10.1145/3381804
MON 12-06 | Wang, Yulong | Jakob Peintner, Maikol Funk Drechsler, Fabio Reway, Georg Seifert, Werner Huber, and Andreas Riener. 2021. Mixed Reality Environment for Complex Scenario Testing. In Mensch und Computer 2021 (MuC ‘21). Association for Computing Machinery, New York, NY, USA, 605–608. DOI: 10.1145/3473856.3474034
Comments
Please keep the comments polite and constructive. In addition to whatever else you want to write, please comment on:
one or more aspects that you like or find interesting; and
one or more aspects that you think need improvement.
As always, please be sure to provide a brief justification for each.