Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed on 12/23/25 has been entered and made of record. Claims 1-20 are cancelled. Claims 21-40 are new. Claims 21-40 are pending.
Response to Arguments
Applicant’s arguments with respect to claims 21, 28 and 35 have been fully considered but they are moot because the arguments do not apply to the references being used in the current rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 21-26, 28-33 and 35-39 are rejected under 35 U.S.C. 103 as being unpatentable over Bocaletti (US 20200285554 A1) in view of Shi et al. (WO 2022082440 A1).
As to Claim 21, Bocaletti teaches A head-mounted device comprising:
a display; one or more processors; and memory, comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform operations for (Bocaletti, Fig 1):
detecting a user input from a user to create a routine (Bocaletti discloses “generating mapped events associated with the user activity; constructing a session path from the mapped events associated with the user activity; generating a session path representing the session path” in claim 1; “the collection of such information include capturing their clickstream, also referred to as a click path or session path, which is the sequence of clicks or other user gestures a user typically utilizes when interacting with a website” in [0022]);
in response to detecting the user input, causing generation of a visual graph (Bocaletti, Fig 6-9); and
arranging, based on the relationship between the first node and the second node of the plurality of nodes, the plurality of nodes in the visual graph into a plurality of segments (Bocaletti discloses “Returning now to FIG. 4, the mapped events are then grouped by their associated sessions in step 408 to generate various session paths” in [0059]. Here, the grouping provides a plurality of groups of mapped events and constructs a session path for each group of mapped events associated with the user activity.);
presenting the visual graph to the user at the display; and storing the visual graph in a data structure for the routine (Bocaletti, Fig 1, 9-10.)
Bocaletti is silent on a head-mounted device with a camera. Shi further teaches the following limitations:
detecting, via the head-mounted device, a user input from a user of the head-mounted device; capturing, via the camera of the head-mounted device, image data of a plurality of user interactions with one or more objects in a real-world environment (Shi discloses “wearable devices (e.g., watches, glasses, gloves, headwear (e.g., hats, helmets, virtual reality headsets, augmented reality headsets, head-mounted devices (HMDs), headbands)… gesture recognition devices” in [0067]; “Sub-step S801: In response to the user's click operation, identify the target to be followed within the image area near the click location” in [0127]; “In particular, in response to the user's click operation on the image captured by the shooting device, the target to be followed can be determined from the image. Click operations include single-click, double-click, and long-press operations” in [0128]; “In one embodiment, selecting the target to be followed through user operation includes: in response to a user's click operation, identifying the target to be followed within an image region near the click location; and labeling the category of the target to be followed and/or the location of the target to be followed.” in [0238]);
determining, based on the image data corresponding to the plurality of user interactions with the one or more objects, one or more properties of the one or more objects (Shi discloses “In one embodiment, an image captured by a camera is detected, and a target to be followed is determined from the image captured by the camera by identifying features of the target to be followed, such as facial features; and/or, the shape and outline of the target to be followed; and/or, the motion attributes of the target to be followed” in [0153]; “In one embodiment, the target to be followed is an infant. The features of the target to be followed are extracted, including facial features; and/or, shape contour; and/or, motion attributes, and the above features are added to the feature library corresponding to the first target. This will enrich the feature library and thus continuously improve the accuracy of recognition.” in [0219]);
defining, based on the image data corresponding to the plurality of user interactions with the one or more objects, a plurality of nodes in the visual graph including the one or more properties (Bocaletti discloses “generating mapped events associated with the user activity; constructing a session path from the mapped events associated with the user activity; generating a session path representing the session path” in claim 1; “the collection of such information include capturing their clickstream, also referred to as a click path or session path, which is the sequence of clicks or other user gestures a user typically utilizes when interacting with a website” in [0022]; “In certain embodiments, the session path analysis system 118 may be implemented… to generate a session path analysis graph 266 corresponding to a particular sequence of interactions between a user 202 and a website 216 during a session. As used herein, a session path analysis graph 266 broadly refers to a graphical representation of a user's 202 interactions with a website 216 during a particular session” in [0031]; “In various embodiments, nodes in the session path graph 656 representing a web page element's or feature's name or functionality may be assigned a visual attribute, such as certain colors, to facilitate visualization.” in [0079], see also Fig 9. Shi also discloses “In one embodiment, as shown in FIG9, the target 905 to be followed is identified in the image area near the user's click position, and the category 907 of the target 905 to be followed is marked, such as baby; and/or the location of the target 905 to be followed is marked, such as rectangle 909 and/or icon 911” in [0132]);
specifying, based on the image data corresponding to the plurality of user interactions with the one or more objects, a relationship between a first node and a second node of the plurality of nodes in the visual graph (Bocaletti discloses “In certain embodiments, the session path graph 656 may be implemented to represent a user's sequence of interactions with various web page elements or features as graph edges, likewise familiar to those of skill in the art” in [0071], see also [0074] and Fig 9-10. Shi further discloses “In particular, in response to the user's click operation on the image captured by the shooting device, the target to be followed can be determined from the image. Click operations include single-click, double-click, and long-press operations.” in [0128]; “In one embodiment, selecting the target to be followed through user operation includes: in response to a user's click operation, identifying the target to be followed within an image region near the click location; and labeling the category of the target to be followed and/or the location of the target to be followed” in [0238]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Bocaletti with the invention of Shi so as to employ a head-mounted device (HMD) as the user device for monitoring user interactions.
As to Claim 22, Bocaletti in view of Shi teaches The head-mounted device of claim 21, wherein the plurality of user interactions comprises one or more of:
an interaction in which the user touches an object of the one or more objects, an interaction in which an object of the one or more objects appears in a view of the user (Bocaletti discloses “In certain embodiments, the interactions may be with one or more elements or features of a web page 246 implemented on the website 216” in [0027]; see also [0020, 0034]. Shi, [0128]),
an interaction in which an object of the one or more objects disappears from a view of the user, and an interaction in which an object of the one or more objects is present in a view of the user when the routine is performed by the user
(Bocaletti discloses “In certain embodiments, the perceived information may include a "between page" interaction, such as the use of hyperlinks to traverse from a current web page 306 to another page 306” in [0043], see also [0020]. Shi, [0128, 0132, 0136, 0238].)
As to Claim 23, Bocaletti in view of Shi teaches The head-mounted device of claim 21, wherein the user input received from the user includes one or more of: a natural language statement made by the user, a gesture made by the user, and a gaze of the user (Bocaletti discloses a user interaction such as a mouse click or other user gesture in [0047].)
As to Claim 24, Bocaletti in view of Shi teaches The head-mounted device of claim 21, wherein at least one other node of the plurality of nodes associates at least one object of the one or more objects with at least one other object of the one or more objects (Bocaletti discloses “In certain embodiments, the session path analysis system 118 may be implemented… to generate a session path analysis graph 266 corresponding to a particular sequence of interactions between a user 202 and a website 216 during a session. As used herein, a session path analysis graph 266 broadly refers to a graphical representation of a user's 202 interactions with a website 216 during a particular session” in [0031]; see also Fig 9-10.)
As to Claim 25, Bocaletti in view of Shi teaches The head-mounted device of claim 21, wherein the relationship between the first node and the second node of the plurality of nodes is a sequential relationship representing that an object of the one or more objects corresponding to the first node occurs in a sequence before an object of the one or more objects corresponding to the second node
(Bocaletti discloses “As likewise used herein, a session path broadly refers to a sequence of two or more interactions between a user 202 and a website 216 during a particular session” in [0030], see also Fig 8-10.)
As to Claim 26, Bocaletti in view of Shi teaches The head-mounted device of claim 21, wherein the plurality of segments comprises a start segment with a node of the plurality of nodes representing a beginning of the routine and an end segment that includes a node of the plurality of nodes representing an ending of the routine (Bocaletti discloses “As likewise used herein, a session path broadly refers to a sequence of two or more interactions between a user 202 and a website 216 during a particular session” in [0030]; “As used herein, a clickstream, also referred to as a click path or session path, broadly refers to the sequence of clicks, or other user gestures, that describes the path a user takes through a website 304 during a session” in [0050], see also Fig 8-10.)
Claim 28 recites limitations similar to those of claim 21 but in method form. Therefore, the same rationale used for claim 21 applies.
Claim 29 is rejected based upon similar rationale as Claim 22.
Claim 30 is rejected based upon similar rationale as Claim 23.
Claim 31 is rejected based upon similar rationale as Claim 24.
Claim 32 is rejected based upon similar rationale as Claim 25.
Claim 33 is rejected based upon similar rationale as Claim 26.
Claim 35 recites limitations similar to those of claim 21 but in computer-readable-medium form. Therefore, the same rationale used for claim 21 applies.
Claim 36 is rejected based upon similar rationale as Claim 22.
Claim 37 is rejected based upon similar rationale as Claim 24.
Claim 38 is rejected based upon similar rationale as Claim 25.
Claim 39 is rejected based upon similar rationale as Claim 26.
Claims 27, 34 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Bocaletti (US 20200285554 A1) in view of Shi et al. (WO 2022082440 A1) and Bienfait et al. (US 2023/0281023 A1).
As to Claim 27, Bocaletti in view of Shi teaches The head-mounted device of claim 21, wherein the one or more processors are further caused to perform operations for:
collecting, via the camera, second data comprising information representing a second plurality of user interactions by the user with respect to a second set of the one or more objects in the real-world environment (Bocaletti also discloses “capturing website data, the website data representing user activity; generating mapped events associated with the user activity” in claim 1. Shi also discloses “the system acquires images captured by the camera and identifies the target to be followed from the images captured by the camera” in [0075]);
recognizing an additional routine from the collected second data, the additional routine corresponding to the routine (Bocaletti discloses “generating mapped events associated with the user activity; constructing a session path from the mapped events associated with the user activity; generating a session path representing the session path” in claim 1; see also Fig 9-12); and
triggering one or more automations in response to recognizing the routine
(Bienfait discloses “detection of user interactions with one or more elements of a user interface of the application, storage of data representing the one or more elements of the user interface of the application and the detected user interactions, and generation, based on the stored data, of an automation executable to automatically execute the detected user interactions on the user interface of the application” in Abstract; “Interaction monitor user interface 1212 includes automation trigger 1214. Automation trigger 1214 may be selected by user 1240 to initiate generation of an automation based on a set of interaction traces stored within interaction traces 1252.” in [0062], see also [0067].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Bocaletti and Shi with the teaching of Bienfait so as to generate an automation based on a set of stored interaction traces (Bienfait, [0062]).
Claim 34 is rejected based upon similar rationale as Claim 27.
Claim 40 is rejected based upon similar rationale as Claim 27.
Conclusion
THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEIMING HE whose telephone number is (571)270-1221. The examiner can normally be reached Monday-Friday, 8:30am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached on 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Weiming He/
Primary Examiner, Art Unit 2611