Prosecution Insights
Last updated: April 19, 2026
Application No. 19/234,357

HEAD MOUNTED PROCESSING APPARATUS

Status: Non-Final Office Action (§103)

Filed: Jun 11, 2025
Examiner: LU, WILLIAM
Art Unit: 2624
Tech Center: 2600 — Communications
Assignee: Maxell, Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability With Interview: 78%

Examiner Intelligence

Career Allow Rate: 71% (425 granted / 595 resolved), above average at +9.4% vs TC avg
Interview Lift: +6.5% (moderate), measured on resolved cases with an interview
Typical Timeline: 2y 8m average prosecution; 31 applications currently pending
Career History: 626 total applications across all art units
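The headline figures above are simple derivations from the examiner's career record. A minimal sketch of that arithmetic (variable names are illustrative, not from the page's data model):

```python
# Derive the headline examiner stats from the career record shown above.
granted = 425    # applications granted by this examiner
resolved = 595   # total resolved applications

career_allow_rate = granted / resolved               # the 71% shown above
interview_lift = 0.065                               # the +6.5% lift shown above
with_interview = career_allow_rate + interview_lift  # the 78% shown above

print(f"{career_allow_rate:.1%}")  # 71.4%
print(f"{with_interview:.1%}")     # 77.9%
```

Note the displayed values are rounded: 71.4% appears as 71%, and 77.9% as 78%.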

Statute-Specific Performance

§101: 5.2% (-34.8% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 595 resolved cases.
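The Tech Center baseline implied by the table above can be recovered by subtracting each reported delta from the examiner's rate. A short sketch (values copied from the table; the dictionary keys are illustrative):

```python
# Back out the implied Tech Center average for each statute:
# examiner rate minus the reported delta vs TC average.
examiner_rate = {"101": 5.2, "102": 9.8, "103": 68.4, "112": 11.4}   # percent
delta_vs_tc   = {"101": -34.8, "102": -30.2, "103": 28.4, "112": -28.6}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)
# {'101': 40.0, '102': 40.0, '103': 40.0, '112': 40.0}
```

Every statute backs out to the same 40.0% baseline, suggesting the page compares each statute-specific rate against a single Tech Center estimate rather than per-statute averages.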

Office Action

DETAILED ACTION

Claims 1-15, filed July 14, 2025, are pending in the current action.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1, 2, 4-6, 9-11, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Simari et al. (US 2019/0371067) in view of Lee et al. (US 2017/0148218).

Consider claim 1, where Simari teaches an information processing apparatus comprising: a display (see Simari ¶77, where augmented reality content appears on the client system's display); a camera configured to capture an image outside the information processing apparatus (see Simari ¶84, where the method may obtain an image from a camera of the client system 103); a memory; and a processing circuit (see Simari Fig. 8) configured to: execute a content stored in the memory; in response to receiving a power off signal for turning off a power of the information processing apparatus, set an object included in an image captured by the camera as a registration key, and store an execution state of the content associated with the registration key as key content pair information (see Simari ¶76-78, where at the end of an AR session, e.g., when the user turns off the client system 103 or moves to a different location, the 3D model that corresponds to the feature points may be stored with the associated WIFI, BLUETOOTH, and GPS metadata; the tracking algorithm may detect sets of 3D points in images of the real-world environment, which are referred to as feature points, and identify correlations between the real-world feature points and previously-seen feature points; the AR application may generate a 3D model of the AR environment based on the feature points on these real-world objects; thus, the feature points serve as a key to the 3D model content, forming a key content pair that is stored when the client system is turned off); in response to receiving a power on signal for turning on the power of the information processing apparatus, control the camera to capture an image outside the information processing apparatus, and determine whether the key content pair information is stored in the memory (see Simari Fig. 5 and ¶76-78, where, when the camera 201 is opened, re-localization may be performed to identify the client system 103's location; during re-localization, the door knob, trashcan, and other points previously detected and stored in the map may be recognized, and the AR content, e.g., a note, may then be displayed at the corresponding locations in the AR environment); in response to determining that the key content pair information is stored in the memory, determine whether the registration key included in the key content pair information is made conformity with an object included in the image captured by the camera; and, in response to determining that the registration key is made conformity with the object, restore the execution state of the content associated with the registration key (see Simari Fig. 5 and ¶76-78, as above).

Simari teaches determining if mapped content exists; however, Simari does not explicitly teach, in response to determining that the key content pair information is not stored in the memory or that the registration key is not made conformity with the object, displaying an initial screen on the display. However, in an analogous field of endeavor, Lee teaches this limitation (see Lee Fig. 4 and ¶83-85, where content is initially displayed in step 412, and when no mapped content is found at step 413 the flow chart loops back to the content displayed at step 412). Therefore, it would have been obvious for one of ordinary skill in the art to modify the process of Simari to do nothing when no mapped content exists, as taught by Lee.
One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using a known technique for providing a default option when no content exists.

Consider claim 2, where Simari in view of Lee teaches the information processing apparatus according to claim 1, wherein the information processing apparatus is worn on a head of a user (see Simari ¶89, where the device is an augmented/virtual reality device; see Lee ¶41, where the device may be a head mounted device).

Consider claim 4, where Simari in view of Lee teaches the information processing apparatus according to claim 1, wherein the object is a wall or an interior of a room (see Simari ¶77, where the table and a wall, light switch, door knob, trash can, and so on may be used to generate the feature points for the real-world objects).

Consider claim 5, where Simari in view of Lee teaches the information processing apparatus according to claim 1, wherein the execution state of the content is information indicating a reproduction portion or a progress portion of the content (see Simari ¶37, 97, where a first user 101 may leave a message for a second user 102 with their dog 104, and when the second user 102 walks into an area near the dog 104, a notification may be presented indicating that the dog has a message, thus presenting a reproduction of the message to the first user or an updated status message for the second user).

Consider claim 6, where Simari teaches an information processing apparatus comprising: a display (see Simari ¶77, where augmented reality content appears on the client system's display); an operational input interface; a camera configured to capture an image outside the information processing apparatus (see Simari ¶84, where the method may obtain an image from a camera of the client system 103); a memory; and a processing circuit configured to: in response to [receiving a predetermined instruction], detect an object included in an image captured by the camera, set the detected object as a registration key, and store an execution state of a content associated with the registration key as key content pair information, the content being stored in the memory; in response to activating the information processing apparatus after shutting off a power of the information processing apparatus or shifting the information processing apparatus to a sleep mode, control the camera to capture an image outside the information processing apparatus (see Simari ¶76-78, where at the end of an AR session, e.g., when the user turns off the client system 103 or moves to a different location, the 3D model that corresponds to the feature points may be stored with the associated WIFI, BLUETOOTH, and GPS metadata; the feature points detected on these real-world objects serve as a key to the 3D model content, forming a key content pair that is stored when the client system is turned off); and determine whether the key content pair information is stored in the memory; in response to determining that the key content pair information is stored in the memory, determine whether the registration key included in the key content pair information is made conformity with an object included in the image captured by the camera; and, in response to determining that the registration key is made conformity with the object, restore the execution state of the content associated with the registration key (see Simari Fig. 5 and ¶76-78, where, when the camera 201 is opened, re-localization may be performed to identify the client system 103's location; during re-localization, the door knob, trashcan, and other points previously detected and stored in the map may be recognized, and the AR content, e.g., a note, may then be displayed at the corresponding locations in the AR environment).

Simari teaches determining if mapped content exists; however, Simari does not explicitly teach, in response to determining that the key content pair information is not stored in the memory or that the registration key is not made conformity with the object, displaying an initial screen on the display. However, in an analogous field of endeavor, Lee teaches this limitation (see Lee Fig. 4 and ¶83-85, where content is initially displayed in step 412, and when no mapped content is found at step 413 the flow chart loops back to the content displayed at step 412). Therefore, it would have been obvious for one of ordinary skill in the art to modify the process of Simari to do nothing when no mapped content exists, as taught by Lee. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using a known technique for providing a default option when no content exists.

Simari teaches camera capture; however, Simari does not explicitly teach receiving a predetermined instruction from a user of the information processing apparatus via the operational input interface. However, in an analogous field of endeavor, Lee teaches this limitation (see Lee ¶3, 78-79, where a photograph is taken after the camera is activated). Therefore, it would have been obvious for one of ordinary skill in the art to modify the scanning operation in Simari (see Simari ¶8, where the user is asked to open the client system's camera to scan for the real-world object) to be a photograph operation, as taught by Lee. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of allowing the user to perform the operations they have been asked to do.

Consider claim 9, where Simari in view of Lee teaches the information processing apparatus according to claim 6, wherein the object is a wall or an interior of a room (see Simari ¶77, where the table and a wall, light switch, door knob, trash can, and so on may be used to generate the feature points for the real-world objects).

Consider claim 10, where Simari in view of Lee teaches the information processing apparatus according to claim 6, wherein the execution state of the content is information indicating a reproduction portion or a progress portion of the content (see Simari ¶37, 97, as applied to claim 5 above).

Consider claim 11, where Simari teaches an information processing apparatus for executing a content (see Simari ¶77, where augmented reality content appears on the client system's display), comprising: an operational input interface; a camera configured to capture an image outside the information processing apparatus (see Simari ¶84, where the method may obtain an image from a camera of the client system 103); a memory; and a processing circuit configured to: in response to [receiving a predetermined instruction], detect an object included in an image captured by the camera, set the detected object as a registration key, and store an execution state of the content associated with the registration key as key content pair information; in response to activating the information processing apparatus after shutting off a power of the information processing apparatus or shifting the information processing apparatus to a sleep mode, control the camera to capture an image outside the information processing apparatus (see Simari ¶76-78, as applied to claim 6 above); and determine whether the key content pair information is stored in the memory; in response to determining that the key content pair information is stored in the memory, determine whether the registration key included in the key content pair information is made conformity with an object included in the image captured by the camera; and, in response to determining that the registration key is made conformity with the object, restore the execution state of the content associated with the registration key (see Simari Fig. 5 and ¶76-78, as applied to claim 6 above).
Simari teaches determining if mapped content exists; however, Simari does not explicitly teach, in response to determining that the key content pair information is not stored in the memory or that the registration key is not made conformity with the object, displaying an initial screen on the display. However, in an analogous field of endeavor, Lee teaches this limitation (see Lee Fig. 4 and ¶83-85, where content is initially displayed in step 412, and when no mapped content is found at step 413 the flow chart loops back to the content displayed at step 412). Therefore, it would have been obvious for one of ordinary skill in the art to modify the process of Simari to do nothing when no mapped content exists, as taught by Lee. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using a known technique for providing a default option when no content exists.

Simari teaches camera capture; however, Simari does not explicitly teach receiving a predetermined instruction from a user of the information processing apparatus via the operational input interface. However, in an analogous field of endeavor, Lee teaches this limitation (see Lee ¶3, 78-79, where a photograph is taken after the camera is activated). Therefore, it would have been obvious for one of ordinary skill in the art to modify the scanning operation in Simari (see Simari ¶8, where the user is asked to open the client system's camera to scan for the real-world object) to be a photograph operation, as taught by Lee. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of allowing the user to perform the operations they have been asked to do.

Consider claim 14, where Simari in view of Lee teaches the information processing apparatus according to claim 11, wherein the object is a wall or an interior of a room (see Simari ¶77, where the table and a wall, light switch, door knob, trash can, and so on may be used to generate the feature points for the real-world objects).

Consider claim 15, where Simari in view of Lee teaches the information processing apparatus according to claim 11, wherein the execution state of the content is information indicating a reproduction portion or a progress portion of the content (see Simari ¶37, 97, where a first user 101 may leave a message for a second user 102 with their dog 104, and when the second user 102 walks into an area near the dog 104, a notification may be presented indicating that the dog has a message, thus presenting a reproduction of the message to the first user or an updated status message for the second user).

Claims 3, 8, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Simari in view of Lee as applied to claims 1, 6, and 11 above, and in further view of Nguyen et al. (US 2018/0349946).

Consider claim 3, where Simari in view of Lee teaches the information processing apparatus according to claim 1, wherein the object is a wall, light switch, table, door knob, or trashcan (see Simari ¶77, where the table and a wall, light switch, door knob, trash can, and so on may be used to generate the feature points for the real-world objects); however, Simari does not explicitly teach a clock, a calendar, a sofa, or a bookshelf. However, in an analogous field of endeavor, Nguyen teaches a sofa (see Nguyen ¶43, where the data object may be a sofa). Therefore, it would have been obvious for one of ordinary skill in the art that the object recognition performed in Simari would also be able to recognize other common objects.

Consider claim 8, where Simari in view of Lee teaches the information processing apparatus according to claim 6, wherein the object is a wall, light switch, table, door knob, or trashcan (see Simari ¶77); however, Simari does not explicitly teach a clock, a calendar, a sofa, or a bookshelf. However, in an analogous field of endeavor, Nguyen teaches a sofa (see Nguyen ¶43). Therefore, it would have been obvious for one of ordinary skill in the art that the object recognition performed in Simari would also be able to recognize other common objects.

Consider claim 13, where Simari in view of Lee teaches the information processing apparatus according to claim 11, wherein the object is a wall, light switch, table, door knob, or trashcan (see Simari ¶77); however, Simari does not explicitly teach a clock, a calendar, a sofa, or a bookshelf. However, in an analogous field of endeavor, Nguyen teaches a sofa (see Nguyen ¶43). Therefore, it would have been obvious for one of ordinary skill in the art that the object recognition performed in Simari would also be able to recognize other common objects.

Claims 7 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Simari in view of Lee as applied to claims 6 and 11 above, and in further view of Um et al. (US 2015/0067580).

Consider claim 7, where Simari in view of Lee teaches the information processing apparatus according to claim 6; however, they do not explicitly teach further comprising a proximity sensor, wherein, in response to detecting that the information processing apparatus is removed from a head of the user by using the proximity sensor, the processing circuit shuts off the power of the information processing apparatus or shifts the information processing apparatus to the sleep mode. However, in an analogous field of endeavor, Um teaches this limitation (see Um ¶88, where the wearable device can perform smart power management in a manner that minimizes power consumption according to a sensed state, based on information sensed by the proximity sensor 610; if the state of the wearable device changes from the active state to the inactive state, the wearable device can perform functions such as stopping, bookmarking, and storing data being played; even after the wearable device becomes inactive, sensing information is continuously checked by the proximity sensor 610 to perform a function more adaptive to the intention of the user; and if the inactive state is maintained for more than a predetermined time period, the wearable device may automatically operate in a non-active mode (sleep mode)). Therefore, it would have been obvious for one of ordinary skill in the art that the user turning off the device, as taught in Simari, could be performed using a proximity sensor as taught by Um. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods of turning a device off to yield predictable results.

Consider claim 12, where Simari in view of Lee teaches the information processing apparatus according to claim 11; however, they do not explicitly teach further comprising a proximity sensor, wherein, in response to detecting that the information processing apparatus is removed from a head of the user by using the proximity sensor, the processing circuit shuts off the power of the information processing apparatus or shifts the information processing apparatus to the sleep mode. However, in an analogous field of endeavor, Um teaches this limitation (see Um ¶88, as applied to claim 7 above).
Therefore, it would have been obvious for one of ordinary skill in the art that the user turning off the device, as taught in Simari, could be performed using a proximity sensor as taught by Um. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods of turning a device off to yield predictable results.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM LU, whose telephone number is (571) 270-1809. The examiner can normally be reached 10am-6:30pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Eason, can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILLIAM LU/
Primary Examiner, Art Unit 2624
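The power-off/power-on flow mapped against claim 1 in the rejection above can be sketched in a few lines. This is a hypothetical illustration of the claimed logic only; all names (`memory`, `on_power_off`, `detect_object`) are invented for the sketch and do not come from the application or any cited reference:

```python
# Hypothetical sketch of the claim 1 flow: store a (registration key,
# execution state) pair at power off; at power on, restore the state if
# the captured object matches the key, otherwise show the initial screen.

memory = {}  # stands in for the apparatus's persistent memory

def on_power_off(camera_image, execution_state, detect_object):
    key = detect_object(camera_image)        # captured object becomes the registration key
    memory["pair"] = (key, execution_state)  # stored as key content pair information

def on_power_on(camera_image, detect_object):
    pair = memory.get("pair")
    if pair is not None:
        key, state = pair
        if detect_object(camera_image) == key:  # registration key conforms to the object
            return ("restore", state)           # restore the stored execution state
    return ("initial_screen", None)             # no pair or no match: initial screen

detect = lambda image: image  # trivial stand-in object detector for the sketch
on_power_off("desk_lamp", {"video_position": "12:34"}, detect)
print(on_power_on("desk_lamp", detect))  # ('restore', {'video_position': '12:34'})
print(on_power_on("kitchen", detect))    # ('initial_screen', None)
```

The sketch makes the disputed limitation concrete: the final `return` is the "display an initial screen" branch that the examiner reads onto Lee rather than Simari.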

Prosecution Timeline

Jun 11, 2025 - Application Filed
Jul 14, 2025 - Response after Non-Final Action
Mar 20, 2026 - Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592191 - PIXEL DRIVING CIRCUIT AND DRIVING METHOD THEREFOR, AND DISPLAY PANEL AND DISPLAY APPARATUS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591307 - APPARATUS AND METHOD FOR DETERMINING AN INTENT OF A USER (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585054 - SUNROOF SYSTEM FOR PERFORMING PASSIVE RADIATIVE COOLING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12566328 - OPTICAL SCANNING DEVICE AND IMAGE FORMING APPARATUS (granted Mar 03, 2026; 2y 5m to grant)
Patent 12566502 - Methods and Systems for Controlling and Interacting with Objects Based on Non-Sensory Information Rendering (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 71% (78% with interview, a +6.5% lift)
Median Time to Grant: 2y 8m
PTA Risk: Low

Based on 595 resolved cases by this examiner. Grant probability is derived from the career allow rate.
