DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-19 are pending.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Independent claims 1 and 17 recite mental processes of selecting an option, triggering a learning option, receiving input data, and providing output data in view of the received input data. Each of these processes could be performed mentally based on the provided information. Mental processes are one of the abstract idea groupings set forth under Prong One of Step 2A of the 2019 Patent Subject Matter Eligibility Guidance.
In addition, the limitations of displaying an option object, displaying a question-and-answer window, and displaying a templated answer as claimed are directed to insignificant extra-solution activities at Step 2A Prong Two, and also would be well-understood, routine, and conventional at Step 2B. The displaying does not integrate the abstract idea into a practical application; rather, the limitations amount to mere data outputting and applying the abstract idea.
Additionally, the elements (e.g., option object, option, user image, learning option) and the additional elements (e.g., course information) are directed to types of information, which do not impose a meaningful limit on the judicial exception; the claims amount to no more than a drafting effort designed to monopolize the exception, because the claimed steps could be performed in the same manner to achieve the same outcome with types of information other than those recited in the claims.
Hence, the claims do not include elements, alone or in combination, sufficient to amount to significantly more than the judicial exception, and fail to integrate the judicial exception into a practical application under Prong Two of Step 2A of the 2019 Patent Subject Matter Eligibility Guidance, because the claimed elements and their combination do not impose any meaningful limits on practicing the abstract idea.
Further, in view of Step 2B of the 2019 Patent Subject Matter Eligibility Guidance, it is determined that the computing elements (such as a storage medium, processing device, and display device) in the claims amount to no more than usage of a generic computing system having generic computing components, which fails to provide an inventive concept or significantly more than the abstract idea because the elements do not improve the functioning of a computing system or provide an improvement to a technical field, since network computing is well known.
Dependent claims 2-3 and 18 further recite additional mental processes of obtaining information according to a user instruction, selecting a question template, inputting the templated question into a machine learning model to generate information, and selecting an answer template. The claims also recite the insignificant extra-solution activity of generating output according to the selected information. The claims additionally recite different data elements (user instruction, input data, question information, course information, templated question) that are being manipulated. Neither the mental processes, the insignificant extra-solution activities, nor the different data elements provide any practical outcome or result, nor do they improve the functioning of a computer or provide an improvement to a technical field.
The selecting of the machine learning model as recited in claim 4, performing natural language processing and filling the selected template as recited in claims 5-6, inputting data elements of a templated question and a restrictive condition as recited in claim 8, receiving user feedback and updating the machine learning model as recited in claims 10-11, obtaining different types of data information as recited in claim 12, and determining a classification, adjusting a weight, and generating a test paper as recited in claim 13 are all directed to mental processes involving gathering and evaluating different types of data. Such processes do not provide any practical outcome or result, nor do they improve the functioning of a computer or provide an improvement to a technical field.
In addition, the additional elements--such as the keyword recited in claims 5-6, the restrictive condition recited in claim 8, the text length recited in claim 9, the feedback recited in claims 10-11, the second user instruction recited in claim 12, the test paper recited in claims 13-14, the subject and grade recited in claim 15, and the virtual object recited in claims 16 and 19--are directed to types of data information being manipulated by the mental processes and the insignificant extra-solution activities. These types of data information do not impose a meaningful limit on the judicial exception; the claims amount to no more than a drafting effort designed to monopolize the exception, because the claimed steps could be performed in the same manner to achieve the same outcome with types of information other than those recited in the claims.
Additionally, the additional elements--such as the machine learning model(s) recited in claims 2-4, 7, and 11, the cloud server and second storage medium recited in claims 3-6 and 8, and the graphical user interface recited in claims 13 and 16-19--amount to no more than usage of a generic computing system having generic computing components performing generic computing functions that implement the mental processes and the insignificant extra-solution activities. These additional elements fail to provide an inventive concept or significantly more than the abstract idea because the elements do not improve the functioning of a computing system or provide an improvement to a technical field, since network computing is well known.
Thus, for at least the reasons above, the claims are not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-9, 12, and 15-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Yang et al. (Pub. No. US 2025/0181939, hereinafter Yang).
With respect to claim 1, Yang discloses a learning system (abstract, Fig 1), comprising:
a host ([0021-0022], Fig 1: a host computer, such as but not limited to endpoint device 102), comprising:
a first storage medium ([0021], Fig 1 & 5), configured to store at least one program, wherein the at least one program comprises a learning program (a learning program is merely a program or application; [0022-0023]: at least one program is being stored for learning, which corresponds to software of a learning video as further described in [0061]);
a first processing device, electrically connected to the first storage medium, wherein the first processing device is configured to execute the learning program ([0021-0023], Fig 1 & 5: a processing device, such as but not limited to a processor, that connects to the storage medium and is configured to execute the learning program as further described in [0056] & [0061]); and
a display device ([0022], Fig 5: a display represented by a display processor), electrically connected to the first processing device, and configured to display a graphical user interface according to the executed learning program ([0023], Fig 3A-3H: display a graphical user interface (GUI) according to the executed learning program of a video), wherein the graphical user interface is configured to display an option object ([0027], Fig 2-3H: the GUI displays at least one option object for a learning video), the option object comprises at least one option, and the at least one option is associated with course information ([0027], Fig 2-3H: the option object includes at least one option for user selection that is associated with course information of a learning video of a particular course), wherein
in response to one of the at least one option being selected, the graphical user interface is configured to display a user image, wherein the user image comprises at least one learning option ([0027-0028], Fig 3A-4: in response to an option selected by a user, display a user image of a learning video with at least one learning option represented by one or more user selections, as further described in [0030-0031]),
in response to one of the at least one learning option being triggered, the graphical user interface is configured to display a question-and-answer window, wherein the question-and-answer window is configured to receive and display input data ([0008], Fig 2-4: in response to a user selection that triggers a learning option, display a question-and-answer window that enables a user to input a question, e.g., a chat window, as further described in [0038-0039]), and
in response to receiving the input data, the question-and-answer window of the graphical user interface is configured to display a templated answer corresponding to the input data and associated with the course information ([0008-0010], Fig 2-4: in response to receiving user input data, the question-and-answer window is configured to display a templated answer, which is merely an answer relevant to the user input and the learning course represented by the video, as further described in [0038-0043]).
With respect to claim 17, Yang discloses an execution method of a learning system (abstract), comprising:
displaying an option object through a graphical user interface of a display device ([0027], Fig 2-3H: the GUI displays at least one option object for a learning video), wherein the option object comprises at least one option, and the at least one option is associated with course information ([0027], Fig 2-3H: the option object includes at least one option for user selection that is associated with course information of a learning video of a particular course);
in response to one of the at least one option being selected, the graphical user interface being configured to display a user image, wherein the user image comprises at least one learning option ([0027-0028], Fig 3A-4: in response to an option selected by a user, display a user image of a learning video with at least one learning option represented by one or more user selections, as further described in [0030-0031]);
in response to one of the at least one learning option being triggered, the graphical user interface displaying a question-and-answer window, wherein the question-and-answer window is configured to receive and display input data ([0008], Fig 2-4: in response to a user selection that triggers a learning option, display a question-and-answer window that enables a user to input a question, e.g., a chat window, as further described in [0038-0039]); and
in response to receiving the input data, the question-and-answer window of the graphical user interface being configured to display a templated answer corresponding to the input data and associated with the course information ([0008-0010], Fig 2-4: in response to receiving user input data, the question-and-answer window is configured to display a templated answer, which is merely an answer relevant to the user input and the learning course represented by the video, as further described in [0038-0043]).
With respect to claims 2 and 18, Yang further discloses wherein the first storage medium is adapted to store a machine learning model, a plurality of question templates corresponding to the machine learning model, and a plurality of answer templates corresponding to the machine learning model ([0027-0028]: storing at least one learning model representing the machine learning model, and question templates and answer templates for different types of questions and answers, as further described in [0039-0043]), and the first processing device is configured to execute:
in response to receiving a user instruction associated with the input data, obtaining question information and the course information according to the user instruction, wherein the course information matches the machine learning model ([0008-0011], Fig 3A-4: in response to receiving a user input instruction, obtain the question and course information according to the user input instruction via contextual information, as further described in [0039-0040]; "wherein the course information matches the machine learning model" is directed to non-functional descriptive material, and the course information of the video matches the machine learning model);
selecting a selected question template from the plurality of question templates according to the course information, and generating a templated question according to the selected question template and the question information ([0008-0011], Fig 3A-4: select a template corresponding to the selected question template, which is merely a template from a library, and generate the templated question, as a templated question is merely a question formed according to a template and question information, e.g., natural language template questioning, as further described in [0034-0037]);
inputting the templated question into the machine learning model to generate answer information ([0028-0034], Fig 2-4: input the templated question into a machine learning model to generate information for the answer in response to the input question via chatting); and
selecting a selected answer template from the plurality of answer templates according to the course information, and generating the templated answer according to the selected answer template and the answer information ([0008-0011], Fig 2-4: select a template for the answer and generate a formatted answer according to the format and information of the answer via contextual information, as further described in [0061-0063]).
With respect to claim 3, Yang further discloses a cloud server configured to be communicatively connected to the host, the cloud server comprises a second storage medium and a second processing device, the second processing device is electrically connected to the second storage medium, the second storage medium is adapted to store a machine learning model, a plurality of question templates corresponding to the machine learning model and a plurality of answer templates corresponding to the machine learning model, and the second processing device ([0021-0023], Fig 1: the system includes a server connected to a host device; the server comprises a storage medium and a processing device adapted to store a machine learning model, question templates, and answer templates to provide a personalized learning experience for the user, as further described in [0028-0029] & [0036-0039]) is configured to execute:
in response to receiving a user instruction associated with the input data, obtaining question information and the course information according to the user instruction, wherein the course information matches the machine learning model ([0008-0011], Fig 3A-4: in response to receiving a user input instruction, obtain the question and course information according to the user input instruction via contextual information, as further described in [0039-0040]; "wherein the course information matches the machine learning model" is directed to non-functional descriptive material, and the course information of the video matches the machine learning model);
selecting a selected question template from the plurality of question templates according to the course information, and generating a templated question according to the selected question template and the question information ([0008-0011], Fig 3A-4: select a template corresponding to the selected question template, which is merely a template from a library, and generate the templated question, as a templated question is merely a question formed according to a template and question information, e.g., natural language template questioning, as further described in [0034-0037]);
inputting the templated question into the machine learning model to generate answer information ([0028-0034], Fig 2-4: input the templated question into a machine learning model to generate information for the answer in response to the input question via chatting); and
selecting a selected answer template from the plurality of answer templates according to the course information, and generating the templated answer according to the selected answer template and the answer information ([0008-0011], Fig 2-4: select a template for the answer and generate a formatted answer according to the format and information of the answer via contextual information, as further described in [0061-0063]).
With respect to claim 4, Yang further discloses wherein the second storage medium is configured to store a plurality of machine learning models, and the second processing device is further configured to execute: selecting the machine learning model from the plurality of machine learning models according to the course information ([0029], [0034]: select a machine learning model according to the course information, such that one or more models is selected in implementation, which includes a single machine learning model corresponding to the claimed machine learning model).
With respect to claim 5, Yang further discloses wherein the second processing device is further configured to execute: performing a natural language processing on the question information to obtain at least one keyword corresponding to the question information ([0029], Fig 3A-3H: perform natural language processing to generate text representing the keyword corresponding to the question information); and
filling the selected question template with the at least one keyword corresponding to the question information to generate the templated question ([0029-0030], Fig 4: fill the template with the text to generate the question via a contextualized user input/question, as further described in [0037-0038]).
With respect to claim 6, Yang further discloses wherein the second processing device is further configured to execute: performing a natural language processing on the answer information to obtain at least one keyword corresponding to the answer information ([0029], Fig 3A-3H: perform natural language processing to generate text representing the keyword corresponding to the answer information via a text answer, as further disclosed in [0039]); and
filling the selected answer template with the at least one keyword corresponding to the answer information to generate the templated answer ([0029], Fig 4: fill the template with the text to generate the answer corresponding to the templated/text answer, as further described in [0038-0039]).
With respect to claim 7, Yang further discloses wherein the machine learning model comprises a large language model (the limitation is directed to non-functional descriptive material as the LLM is not being functionally involved in the system; [0038]: LLM as the machine learning model).
With respect to claim 8, Yang further discloses wherein the second processing device is further configured to execute: inputting the templated question and a restrictive condition to the machine learning model to generate the answer information ([0029], Fig 3A-3H: input the templated/structured/textual question and a condition set forth by the input, e.g., a contextual condition, to the machine learning model to generate the answer information, as an answer is being generated, as further described in [0037-0039]).
With respect to claim 9, Yang further discloses wherein the restrictive condition comprises a text length (the limitation is directed to non-functional descriptive material, as neither the condition nor the text length is functionally involved in the system; [0029], Fig 3A-3H: the condition comprises a text length via input and contextual text).
With respect to claim 12, Yang further discloses wherein the user instruction comprises a first user instruction and a second user instruction, and the second processing device is further configured to execute: in response to receiving the first user instruction corresponding to the option, obtaining the course information ([0040-0043], Fig 3A: receive a first user instruction regarding the course information via a first user selection); and
in response to receiving the second user instruction corresponding to the input data, obtaining the question information according to the second user instruction ([0040-0043], Fig 3B-3H: obtain the question information according to the user instruction with a second user selection corresponding to the input data).
With respect to claim 15, Yang further discloses wherein the course information comprises at least one of the following: a subject, a grade, and a semester (the limitation is directed to non-functional descriptive material that does not impact the functionality of the claimed system; [0027], Fig 3A: the course information includes at least a subject of the learning video).
With respect to claims 16 and 19, Yang further discloses wherein the graphical user interface is further configured to display a virtual object, in response to the virtual object being triggered, the graphical user interface is configured to display a function window, the function window at least partially overlaps the user image, and the function window has the at least one learning option ([0027-0028], Fig 3A-3H: triggering the virtual object displays a function window with selection options for learning, and the function window partially overlaps the user image in the video).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 10-11 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Yang, as applied to claim 1, in view of Naufel (Pub No. US 2024/0379019).
With respect to claim 10, Yang does not explicitly disclose wherein the first processing device is further configured to execute: in response to displaying the templated answer, receiving a user feedback corresponding to the templated answer; and
the second processing device is further configured to execute: updating the machine learning model according to the user feedback.
However, Naufel discloses wherein the first processing device is further configured to execute: in response to displaying the templated answer, receiving a user feedback corresponding to the templated answer ([0266-0267]: receive user feedback corresponding to displayed information, including the answer); and
the second processing device is further configured to execute: updating the machine learning model according to the user feedback ([0008]: update the machine learning model, e.g. LLM, with user feedback, as further described in [0263-0267]).
Since (i) Yang also discloses receiving feedback associated with the answer and updating a machine learning model with the feedback ([0083]), and (ii) both Yang and Naufel are from the same field of endeavor, because both are directed to interactive information provision in a learning system, which is the same field of endeavor as the claimed invention, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify and combine their teachings by incorporating Naufel's usage of user feedback for machine learning model updating into Yang as claimed. The motivation to combine is to provide an effective as well as an adaptive and scalable personalized learning experience (Yang, [0007]; Naufel, [0002]).
With respect to claim 11, the combined teachings of Yang and Naufel further disclose wherein the second processing device is further configured to execute: updating the machine learning model according to reinforcement learning from human feedback (Naufel, [0008], [0263-0267]: update the machine learning model with reinforcement from user/human feedback).
With respect to claim 13, Yang further discloses wherein the graphical user interface is configured to display a function window, and wherein the function window comprises two learning options, and the display device is further configured to execute: in response to the other one of the two learning options in the function window being triggered, the graphical user interface displaying a review window ([0041-0042], Fig 3A-3H: display a function window with selection options corresponding to the learning options, and display an update window corresponding to a review window in response to a user selection that triggers a learning option).
Yang does not explicitly disclose the review window comprising a test paper corresponding to the course information.
However, these differences are only found in the nonfunctional descriptive material and are not functionally involved in the steps recited. All the steps of the claimed system would be performed the same regardless of whether the review window comprises a test paper corresponding to the course information or not. Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to have any type of data information corresponding to the course information in the review window displayed at the graphical user interface, because such data does not functionally relate to the steps in the claimed system and because the subjective interpretation of the data does not patentably distinguish the claimed invention.
Also, Naufel discloses a review window comprising a test paper corresponding to the course information ([0247], [0351], Fig 2: a display window with test/assessment material representing the test paper corresponding to the course information).
Since both Yang and Naufel are from the same field of endeavor, because both are directed to interactive information provision in a learning system, which is the same field of endeavor as the claimed invention, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify and combine their teachings by incorporating the test material display of Naufel into Yang for learning as claimed. The motivation to combine is to provide an effective as well as an adaptive and scalable personalized learning experience (Yang, [0007]; Naufel, [0002]).
With respect to claim 14, Yang discloses wherein the second storage medium further stores a classification model, and the second processing device is further configured to execute:
determining a question classification corresponding to a user learning record according to the classification model, wherein the user learning record comprises at least one of the question information, the course information, the templated question, the answer information, and the templated answer (the "at least one" language requires only one item of the list to read on the limitation; [0030-0034]: store a classification model and determine a question classification at least in view of the selected image corresponding to a user record with user information, where the record includes at least a chat record with a question and answer with respect to the course, as further described in [0040-0041]);
adjusting a weight of the question classification according to the user learning record ([0033]: adjust a weight by applying different weights according to the user interaction/record); and
generating content according to the weight, wherein the content comprises at least one question corresponding to the question classification, and a number of the at least one question is associated with the weight ([0033-0035]: generate content represented by the output according to the weight, where the content comprises different types of information, including a classified question with respect to the user selection).
Yang does not disclose the generated content is directed to a test paper as claimed.
However, this difference is only found in the nonfunctional descriptive material and is not functionally involved in the steps recited. All the steps of the claimed system would be performed the same regardless of whether a test paper is being generated or not. Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to generate any type of data information because such data does not functionally relate to the steps in the system claimed and because the subjective interpretation of the data does not patentably distinguish the claimed invention.
Also, Naufel discloses generating a test paper ([0247], [0351], Fig 2: generate a test or an assessment corresponding to the test paper, since the test paper is merely a type of information being generated).
Since both Yang and Naufel are from the same field of endeavor, because both are directed to interactive information provision in a learning system, which is the same field of endeavor as the claimed invention, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify and combine their teachings by incorporating the test material display of Naufel into Yang for learning as claimed. The motivation to combine is to provide an effective as well as an adaptive and scalable personalized learning experience (Yang, [0007]; Naufel, [0002]).
Examiner Note
The Examiner has cited particular columns/paragraphs and line numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. The applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the Examiner.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michelle Owyang whose telephone number is (571)270-1254. The examiner can normally be reached Monday-Friday, 8am-6pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones can be reached at (571)272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHELLE N OWYANG/ Primary Examiner, Art Unit 2168