DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
Regarding FIGS. 3, 4 and 6-11, 37 CFR 1.84(a)(1) states, in part, that black and white drawings are normally required, and that India ink, or its equivalent that secures solid black lines, must be used for drawings. In the present case, FIGS. 3, 4 and 6-11 have very faint text and lines. Therefore, the failure to use solid black text and lines prevents FIGS. 3, 4 and 6-11 from complying with 37 CFR 1.84(a)(1).
Regarding FIGS. 4 and 6-11, 37 CFR 1.84(b)(1) states, in part, that black and white photographs, including photocopies of photographs and clip art, are not ordinarily permitted in utility and design patent applications. The Office will accept photographs in utility and design patent applications, however, if photographs are the only practicable medium for illustrating the claimed invention. The photographs must be of sufficient quality so that all details in the photographs are reproducible in the printed patent. In the present case, FIGS. 4 and 6-11 contain screenshots/clip art that are not of sufficient quality for all details in the screenshots to be reproducible in the printed patent. Therefore, the use of screenshots/clip art lacking sufficient reproducible quality prevents FIGS. 4 and 6-11 from complying with 37 CFR 1.84(b)(1).
Claim Rejections - 35 USC § 101
35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claim 1 is directed to “a method” (i.e. “a process”), claim 9 is directed to “a non-transitory medium” (i.e. “a machine”), and claim 17 is directed to “a system” (i.e. “a machine”), hence the claims are directed to one of the four statutory categories (i.e. process, machine, manufacture, or composition of matter). In other words, Step 1 of the subject-matter eligibility analysis is “Yes.”
However, the claims are drawn to the abstract idea of “providing information,” either in the form of “certain methods of organizing human activity,” i.e., managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), or in the form of “mental processes,” i.e., processes that can be performed in the human mind (including an observation, evaluation, judgment, or opinion). Under either grouping, the claims recite the following limitations:
Per claim 1:
“displaying user readable text; and in response to selection of a word or a phrase by a user, providing information about the selected word or phrase.”
Per claim 9:
“displaying user readable text; and
in response to selection of a word or a phrase by a user, providing information about the selected word or phrase.”
Per claim 17:
“a first component configured to display user readable text on the display; and
a second component configured to providing information about the selected word or phrase in response to selection of a word or a phrase by a user.”
These limitations simply describe a process of data gathering and manipulation, which is analogous to “a process of gathering and analyzing information of a specified content, then displaying the results, [without] any particular assertedly inventive technology for performing those functions.” Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1354 (Fed. Cir. 2016). Hence, these limitations are akin to concepts that have been identified among the non-limiting examples of abstract ideas. In other words, Step 2A, Prong 1 of the subject-matter eligibility analysis is “Yes.”
Furthermore, the Applicant’s claimed elements of “a computer system comprising at least one processor and a display” and “generative artificial intelligence (AI)” are merely claimed to generally link the use of a judicial exception (e.g., pre-solution activity of data gathering and post-solution activity of presenting data) to (1) a particular technological environment or (2) a field of use, per MPEP § 2106.05(h), and amount to mere instructions to implement the abstract idea on a computer, or merely use a computer as a tool to perform the abstract idea, per MPEP § 2106.05(f). In other words, the claimed “providing information” does not provide a practical application; thus, Step 2A, Prong 2 of the subject-matter eligibility analysis is “No.”
Likewise, the claims do not include additional elements that, either alone or in combination, are sufficient to amount to significantly more than the judicial exception because, to the extent that, e.g., “a computer system comprising at least one processor and a display” and “generative artificial intelligence (AI)” are claimed, these are generic, well-known, and conventional data gathering computing elements. As evidence that these elements are generic, well-known, and conventional, the Applicant’s specification discloses them in a manner that indicates that the additional elements are sufficiently well-known that the specification does not need to describe the particulars of such additional elements to satisfy 35 U.S.C. § 112(a), per MPEP § 2106.07(a)(III)(A). As such, this satisfies the Examiner’s evidentiary burden under the Berkheimer memorandum.
Specifically, the Applicant’s claimed “a computer system comprising at least one processor and a display” is not sufficiently described in the written description of the specification as originally filed and is reasonably understood to be any form of a computer having generic, routine, and conventional components. As such, this element is reasonably interpreted as a generic computer that provides no details of anything beyond ubiquitous, standard, off-the-shelf equipment.
Likewise, the Applicant’s claimed “generative artificial intelligence (AI)” is also not sufficiently described in the written description of the specification as originally filed. At a minimum, the element is best described in para. [0076] of the Applicant’s written description as originally filed, which provides the following:
[0076]: “AI models are built from data collected to predict understanding, satisfaction, interest.”
[0076]: “At any point of reading a definition, the user can choose to start a discussion with an automated ChatGPT-like chatbot or a human expert (701, 702, 703, 704, 705, 706 and 707).”
As such, the element is broadly described as a generic computer software component that is commonly and commercially available. Thus, it is reasonably interpreted as a generic, well-known, and conventional data computing element.
Therefore, the Applicant’s own specification discloses ubiquitous standard equipment that is generic, routine, conventional, and/or commercially available and does not provide anything significantly more. Thus, Step 2B of the subject-matter eligibility analysis is “No.”
In addition, dependent claims 2-8, 10-16 and 18-20 do not provide a practical application and are insufficient to amount to significantly more than the judicial exception. As such, dependent claims 2-8, 10-16 and 18-20 are also rejected under 35 U.S.C. § 101 based on their respective dependencies from claim 1, 9, or 17. Therefore, claims 1-20 are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tholfsen et al., US 2024/0321133 (hereinafter “Tholfsen”).
Regarding claim 1, and substantially similar limitations in claims 9 and 17, Tholfsen discloses displaying, by a computer system comprising at least one processor and a display (see para. [0041]: Teacher computing device 110 and student computing device 140 are representative of computing devices, such as laptops or desktop computers, mobile computing devices, such as tablet computers or cellular phones, and any other suitable devices of which computing device 901 in FIG. 9 is broadly representative), user readable text (see para. [0071]: FIG. 5B continues user experience 500 and illustrates a user interface generated and displayed by the application when the user selects the Reading Progress assignment type in drop-down menu 501. As illustrated in FIG. 5B, the teacher or other user can import a Word document or PDF file containing reading text, generate a custom reading passage based on select words and/or reading parameters, or browse a sample library of reading texts); and in response to selection of a word or a phrase by a user, providing generative artificial intelligence (AI) information about the selected word or phrase (see para. [0020]: The application service is directed to automatically generating highly targeted reading passages, such as for students in a class, using generative artificial intelligence (AI). The passages may be generated to allow practicing of words that student(s) have been known to have trouble with, to practice certain phonics rules, etc.; see para. [0071]: FIG. 5B continues user experience 500 and illustrates a user interface generated and displayed by the application when the user selects the Reading Progress assignment type in drop-down menu 501. As illustrated in FIG. 5B, the teacher or other user can import a Word document or PDF file containing reading text, generate a custom reading passage based on select words and/or reading parameters, or browse a sample library of reading texts.).
Regarding claim 2, and substantially similar limitations in claims 10 and 18, Tholfsen discloses wherein the selection can be done with one of a voice command, a tap, a touch, a stare, or by using a computer mouse (see para. [0082]: FIG. 8B illustrates user experience 800 when the student user selects “Single passage.” User experience 800 presents trouble words identified by the reading application for that particular user. In an implementation, the student's trouble words were identified by a speech engine or insights engine based on audio data captured of the student reading one or more selected passages. In FIG. 8B, the student can select a topic and difficulty level for the passage. Upon clicking “Generate,” the reading application presents the custom reading passage configured according to a prompt generated by the reading application.).
Regarding claim 3, and substantially similar limitations in claims 11 and 19, Tholfsen discloses wherein the generative AI information is provided via a pop-up text box (see para. [0077]: When the student completes the reading, the speech engine generates pop-up window 601 which presents an analysis of the student's reading ability based on the captured audio data; see para. [0079]: In FIG. 7B, the reading instruction application displays pop-up window 701 in user experience 700 where the teacher can review a custom reading passage generated by the LLM based on a prompt including the selected phonics rules or phonemes).
Regarding claim 4, and substantially similar limitations in claim 12, Tholfsen discloses wherein the pop-up text box comprises a hyperlink to a website on the world wide web (see FIG. 7B, “Learn more about AI driven language model”).
Regarding claim 5, and substantially similar limitations in claims 13 and 20, Tholfsen discloses wherein the pop-up text box is manually editable (see para. [0080]: Continuing with FIG. 7C, the reading instruction application displays pop-up window 703 where the teacher can select particular words or trouble words to incorporate in the custom reading passage based on the selected phonics rules or phonemes and review the custom reading passage generated by the LLM based on a prompt including the selected words).
Regarding claim 6, and substantially similar limitations in claim 14, Tholfsen discloses wherein the generative AI information is provided via one of an audio message, a video message, or an image (see para. [0031]: Multimodal models are a class of foundation model which leverages the pre-trained knowledge and representation abilities of foundation models to extend their capabilities to handle multimodal data, such as text, image, video, and audio data. Multimodal models may leverage techniques like attention mechanisms and shared encoders to fuse information from different modalities and create joint representations. Learning joint representations across different modalities enables multimodal models to generate multimodal outputs that are coherent, diverse, expressive, and contextually rich. For example, multimodal models can generate a caption or textual description of the given image, for example, by using an image encoder to extract visual features, then feeding the visual features to a language decoder to generate a descriptive caption. Similarly, multimodal models can generate an image based on a text description (or, in some scenarios, a spoken description transcribed by a speech-to-text engine). Multimodal models work in a similar fashion with video, generating a text description of the video or generating video based on a text description; see para. [0035]: In some implementations, the technology disclosed herein incorporates a foundation model service, such as a multimodal model service hosting a multimodal model, to teach a variety of subjects beyond reading instruction, such as subjects in the social sciences (e.g., history, geography), scientific subjects (e.g., biology, chemistry, astronomy), math subjects (e.g., geometry, statistics, game theory), or subjects in the visual arts (e.g., fine art appreciation, photography, art history).
In an implementation, prompts including selected text and/or imagery can be fed into a multimodal learning instruction environment to generate customized text or images for instructional activities such as identifying a protein conformation, graphing or charting data for analysis, map reading, geometric proofs, and so on).
Regarding claim 7, and substantially similar limitations in claim 15, Tholfsen discloses further comprising: upon determining that the generative AI information is inaccurate or inappropriate, providing a different generative AI information (see para. [0038]: For example, the disclosed technology streamlines the interaction between the user (e.g., a teacher) and the application service by generating prompts which keep the LLM on task and reduce the incidence of erroneous, inappropriate, or off-target replies; see para. [0054]: The prompt generated by the application service based on the trouble words may also instruct the foundation model service to avoid certain types of content, such as content, language, or subject matter which may be personally or culturally insensitive or inappropriate; see para. [0082]: FIG. 8B illustrates user experience 800 when the student user selects “Single passage.” User experience 800 presents trouble words identified by the reading application for that particular user. In an implementation, the student's trouble words were identified by a speech engine or insights engine based on audio data captured of the student reading one or more selected passages. In FIG. 8B, the student can select a topic and difficulty level for the passage. Upon clicking “Generate,” the reading application presents the custom reading passage configured according to a prompt generated by the reading application. The prompt generated by the reading application includes the trouble words presented in user experience 800 along with the student-selected parameters, but may include other tasks, instructions, or rules to ensure that the generated content is appropriate (e.g., contains no offensive content). For example, the foundation model service may be tasked with also incorporating words with phonemes or based on phonics rules with which the student has had difficulty as determined by the speech engine.
In an implementation, the reading application evaluates the custom reading passage received from the foundation model service for appropriate content and difficulty prior to presenting the passage in the user interface.).
Regarding claim 8, and substantially similar limitations in claim 16, Tholfsen discloses further comprising: in response to selection of the word or the phrase by a user for a second time, providing different generative AI information about the selected word or phrase from the previously provided generative AI information; wherein the different information is generated by a machine learning model (MLM); wherein the MLM is trained by using captured data about the user; and wherein the different information is generated by the MLM in real time in response to selection of the word or the phrase by the user for the second time (see claim 6: to evaluate the reading passage according to one or more of the parameters and submit a second prompt to the foundation model service to generate a second reading passage which comprises a modification to the reading passage).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT P. BULLINGTON whose telephone number is (313) 446-4841. The examiner can normally be reached on Monday through Friday from 8 A.M. to 4 P.M. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Peter Vasat, can be reached on (571) 270-7625. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free).
/Robert P Bullington, Esq./ Primary Examiner, Art Unit 3715