DETAILED ACTION
This action is in response to the claim amendments filed 01/23/2026. Claims 1-27 are pending with claims 1, 9 and 17 currently being amended, and claims 25-27 newly added.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 8-13, 16-21, 24, 25 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Ray et al. [US20200394935], hereinafter Ray, in view of Wee [US20150010889].
Regarding claim 1, Ray discloses a method comprising:
at a device having a processor and one or more sensors (Fig. 1):
obtaining, via the one or more sensors, sensor data in a physical environment; identifying an object or an activity in the physical environment based on the sensor data ([0036], “In an exemplary embodiment, an image recognition engine is used to locate and identify objects in the environment, based on data provided by a camera sensor located in the environment or the camera associated with the AR device 112”);
determining a learning objective corresponding to a language ([0053], “a flow chart of a method 300 for using augmented reality for assisting speech development of multiple languages”) and a current learning level of the user in the language (abstract, “determining a confusion level of the target user based on a use of the word in the conversation”); and
in accordance with a determination that the object or activity satisfies a relevancy criterion with respect to the language and current learning level of the learning objective of the user (abstract, “implementing, by the processor, an augmented reality technique based on the confusion level of the target user”): controlling rendering of an extended-reality (XR) environment by spatially positioning language teaching content at a three-dimensional location determined based on a position of the identified object or activity in the physical environment (Fig. 11, [0055], “Step 403 searches for an object in the environment that correlates with the detected word spoken in the different language than the other words. Step 404 determines whether an object can be located in the physical environment. If no, step 405 generates the object within the environment using augmented reality. If yes, then step 406 highlights the physical object using augmented reality”).
However, Ray does not explicitly disclose identifying a current context of a user of the device, and that the current context satisfies an availability criterion.
Nevertheless, Wee teaches, in a like invention, identifying a current context of a user of the device; and that the current context satisfies an availability criterion ([0044], “The service provision server 100 may receive context information from the user terminal 200, may autonomously collect context information, or may receive event information using a user's scheduler, operating in conjunction with the service provision server 100, in response to a request from the user” and [0052], “The context information may include time information, season information, weather information, issue information, theme information, event information received from the user terminal 200, surrounding environment information, emotion information or location information received from the user terminal 200. In this case, the service provision server 100 may collect time information about time or a date from itself; and may receive information about weather or traffic context from an external server or from a service administrator. Furthermore, issue information, such as a K-pop craze or a stock market crash, and theme information, such as the vacation season, the entrance exam season, the graduation season or Valentine Day, may be received from a service administrator in real time. Meanwhile, since a user event, surrounding environment information, emotion, and location correspond to the special context of the user, they may be collected from the user terminal 200. In particular, the location information may be received using the GPS of the user terminal 200, and the surrounding environment information may be received via the camera or sensor of the user terminal 200. However, when the event information is received from the scheduler of the user at step S200, foreign language learning content may be extracted using the received event information. In this case, at step S300, foreign language learning content suitable for user context may be extracted by searching stored foreign language learning content in the database 110, or may be generated by combining multiple pieces of content in accordance with user context information and user information”).
Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method disclosed by Ray to identify the current context and to apply an availability criterion based on the current context when providing language teaching content, as taught by Wee, in order to closely relate the language teaching content to the user's current context so that the user has an immersive learning experience and achieves a better learning result.
Regarding claim 2, the combination of Ray and Wee discloses the method of claim 1, wherein the sensor data comprises images of the physical environment or audio of the physical environment (Ray, [0036], “In an exemplary embodiment, an image recognition engine is used to locate and identify objects in the environment, based on data provided by a camera sensor located in the environment or the camera associated with the AR device 112”).
Regarding claim 3, the combination of Ray and Wee discloses the method of claim 1, wherein providing language teaching content for the object or activity comprises positioning the language teaching content within an extended reality (XR) environment based on a position of the object or activity (Ray, [0036], “If the object is physically present in the environment, the isolation module 132 alerts the target user to a location of the object within the environment”).
Regarding claim 4, the combination of Ray and Wee discloses the method of claim 1 further comprising, in accordance with the determination that the object or activity satisfies the relevancy criterion with respect to the learning objective of the user and that the current context satisfies the availability criterion, obtaining the language teaching content for the object or activity based on a user level or a user history determined based on observing use of the language by the user (Ray, [0055], “step 409 renders a translation in the AR environment using augmented reality” and Wee, [0052], “In this case, at step S300, foreign language learning content suitable for user context may be extracted by searching stored foreign language learning content in the database 110, or may be generated by combining multiple pieces of content in accordance with user context information and user information”).
Regarding claim 5, the combination of Ray and Wee discloses the method of claim 1 further comprising, in accordance with a determination that the object or activity does not satisfy the relevancy criterion with respect to the learning objective of the user or that the current context does not satisfy the availability criterion, forgoing providing the language teaching content (Ray, [0044], “If the target user's confusion level exceeds the pre-determined threshold, the implementation module 134 implements a second augmented reality technique that is different than the first augmented reality technique. The second augmented reality technique is designed to distract the target user from the conversation”).
Regarding claim 8, the combination of Ray and Wee discloses the method of claim 1, wherein the object or activity satisfying the relevancy criterion is based on determining that the object, the activity, or a type of the physical environment relates to a lesson topic (Ray, [0029], “Exemplary embodiments of environment 207 include one or more rooms of a house, a car, a floor of a building, an office, a kitchen of a restaurant, a classroom of a school” and [0055], “At step 402, the system detects a word from a sentence of the conversation that is spoken in another language from the other words in the sentence or conversation. Step 403 searches for an object in the environment that correlates with the detected word spoken in the different language than the other words”).
Regarding claims 9-13 and 16, please refer to the claim rejections of claims 1-5 and 8.
Regarding claims 17-21 and 24, please refer to the claim rejections of claims 1-5 and 8.
Regarding claim 25, the combination of Ray and Wee discloses the method of claim 1, wherein the language teaching content is spatially positioned proximate to the object or activity to which it relates, and wherein the three-dimensional location is determined based on a three-dimensional (3D) representation of the physical environment or positions of objects within the physical environment relative to a 3D coordinate system (Ray, Fig. 7 and [0048], “In the illustrated embodiment, the first speaker 201 states, “We need to place a new silla in the corner of our living room.” In response to the detection that the word “silla” (chair) is in a different language than the other words in the sentence, the system searches for a chair in the environment. Because a chair is not located in the environment, the system generates object 210c which is rendered as a chair in the augmented reality environment of the target user 203, and isolates object 210c, shown as heavier line thickness in FIG. 7.”).
Regarding claim 27, the combination of Ray and Wee discloses the method of claim 1, wherein assessing whether the current context satisfies an availability criterion corresponding to whether a language lesson is appropriate now comprises determining, based on one or more of: a current activity level of the user, a current cognitive load of the user, a time of day, a location of the user, a current schedule of the user, a user-defined preference for receiving language teaching content, or whether the user is currently engaged in an interaction requiring an attention of the user (Wee, [0044], “The service provision server 100 may receive context information from the user terminal 200, may autonomously collect context information, or may receive event information using a user's scheduler, operating in conjunction with the service provision server 100, in response to a request from the user”).
Claims 6, 14 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Ray, in view of Wee, further in view of Kawasaki Fortner et al. [US20210183261], hereinafter Kawasaki Fortner.
Regarding claim 6, the combination of Ray and Wee discloses the method of claim 1, and further discloses: determining a user level (Wee, [0049], “At step S100, the service provision server 100 may receive user information from the user terminal 200. In this case, the user information may include the user's age, gender, vocation, foreign language level, hobby, field of interest or character”); detecting a re-occurrence of the object or activity following the providing of the language teaching content (Ray, [0055], “At step 402, the system detects a word from a sentence of the conversation that is spoken in another language from the other words in the sentence or conversation. Step 403 searches for an object in the environment that correlates with the detected word spoken in the different language than the other words”); and, based on detecting the re-occurrence of the object or activity and the determined user level, providing a second language teaching content different than the language teaching content (Ray, [0055], “step 409 renders a translation in the AR environment using augmented reality” and Wee, [0052], “In this case, at step S300, foreign language learning content suitable for user context may be extracted by searching stored foreign language learning content in the database 110”; because the user context can change, the language teaching content can be different based on the changing user context under the broadest reasonable interpretation (BRI)).
However, the combination of Ray and Wee does not disclose that the language teaching content comprises a prompt for the user to provide a response by saying a word or phrase, or determining a user level based on a response to the prompt.
Nevertheless, Kawasaki Fortner teaches, in a like invention, using a prompt for the user to provide a response by saying a word or phrase, and determining a user level based on a response to the prompt in foreign language learning ([0022], “In general, methods consistent with the present disclosure enable the user U to provide mastery-indicating inputs to interactive assessments (e.g., prompts) that mimic natural conversation with a live person. The mastery-indicating inputs may include a spoken word, a spoken phrase, a spoken sentence, a spoken utterance, a gesture (e.g., with the user U's hand(s)), a facial expression, a sung pitch, a sung word, a sung phrase, etc.” and [0040], “The method 500A further comprises assessing, in step 540, a level of user mastery of the first subject matter based on the at least one mastery-indicating input”).
Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method disclosed by the combination of Ray and Wee to include the prompt and to determine a user level based on a response to the prompt, as taught by Kawasaki Fortner, in order to assess the user's level more accurately and thus provide better learning content for the user.
Regarding claims 14 and 22, please refer to the claim rejection of claim 6.
Claims 7, 15 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Ray, in view of Wee, further in view of Kim et al. [US20210132688], hereinafter Kim.
Regarding claim 7, the combination of Ray and Wee discloses the method of claim 1, wherein the object or activity satisfying the relevancy criterion is based on determining that there is an indication of interest in the object or the activity (Ray, Fig. 11, step 402). However, the combination of Ray and Wee does not disclose that determining that there is the indication of interest in the object or the activity is based on determining that: the user's gaze corresponds to the object or activity, or the user has contacted the object.
Nevertheless, Kim teaches, in a like invention, determining the indication of interest in the object or the activity based on determining that the user's gaze corresponds to the object or activity ([0045], “In at least one embodiment, there may be various objects or occurrences in video frame 102 that may be of interest to a viewer, or may attract a viewer's attention, such that a gaze direction of this viewer will intersect a display of video frame 102 at a specific location” and [0065], “An increasing variety of industries and applications… smart real-time language translation in video chat applications”).
Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method disclosed by the combination of Ray and Wee to determine the indication of interest in the object based on determining that the user's gaze corresponds to the object, as taught by Kim, in order to provide multiple methods of detecting a user's interest, since different users might express interest in different ways.
Regarding claims 15 and 23, please refer to the claim rejection of claim 7.
Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Ray, in view of Wee, further in view of PAHUD et al. [US20180314484], hereinafter PAHUD.
Regarding claim 26, the combination of Ray and Wee discloses the method of claim 1. However, the combination of Ray and Wee does not explicitly disclose wherein controlling rendering of the extended-reality (XR) environment further comprises positioning and sizing the language teaching content to avoid obstructing a view of the user of other people or objects in the physical environment.
Nevertheless, PAHUD teaches in a like invention, positioning and sizing the digital content to avoid obstructing a view of the user of other people or objects in the physical environment ([0054], “the collaborative view positioning component 318 can position the rendered set of visual data so that it does not obstruct the first collaborative participant user's view of the second collaborative participant user”).
Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method disclosed by the combination of Ray and Wee to position and size the digital content to avoid obstructing the user's view of other people or objects in the physical environment, as taught by PAHUD, in order to provide a fuller, unobstructed view to the user.
Response to Arguments
Applicant's arguments filed 01/23/2026 have been fully considered but they are not persuasive.
With respect to the claim rejections under 35 U.S.C. 103, applicant argues “Ray's description of detecting a word spoken in another language and rendering a translation in the AR environment does not teach or suggest "determining a learning objective corresponding to a language and a current learning level of the user in the language" and determining that the object or activity satisfies a relevancy criterion "with respect to the language and current learning level of the learning objective," as recited in amended Claim 1” (p. 11). Examiner respectfully disagrees. Ray discloses determining a learning objective corresponding to a language ([0053], “a flow chart of a method 300 for using augmented reality for assisting speech development of multiple languages”) and a current learning level of the user in the language (abstract, “determining a confusion level of the target user based on a use of the word in the conversation”), and in accordance with a determination that the object or activity satisfies a relevancy criterion with respect to the language and current learning level of the learning objective of the user ([0053], “for assisting speech development of multiple languages” and abstract, “implementing, by the processor, an augmented reality technique based on the confusion level of the target user”).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YINGCHUAN ZHANG whose telephone number is (571)272-1375. The examiner can normally be reached 8:00 - 4:30 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xuan Thai, can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YINGCHUAN ZHANG/Primary Examiner, Art Unit 3715