Prosecution Insights
Last updated: April 19, 2026
Application No. 18/415,018

Security Operations via Augmented Reality Devices

Non-Final OA: §103, §112
Filed: Jan 17, 2024
Examiner: PATEL, JITESH
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: Micron Technology, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 78% (Favorable)
OA Rounds: 1-2
To Grant: 2y 2m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 78%, above average (312 granted / 398 resolved; +16.4% vs TC avg; arithmetic check in the sketch below)
Interview Lift: +12.4% (moderate), based on resolved cases with an interview
Avg Prosecution: 2y 2m (typical timeline); 14 applications currently pending
Career History: 412 total applications across all art units
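The headline examiner figures reduce to simple arithmetic on the stated career counts. A minimal sketch that reproduces them, assuming the tool computes the allow rate as grants over resolved cases and treats the "vs TC avg" figure as a straight percentage-point difference (the variable names and the implied TC average are illustrative, not from the source):

```python
# Reproduce the Examiner Intelligence figures from the stated counts (assumed method).
granted = 312          # career grants (stated above)
resolved = 398         # career resolved cases (stated above)

allow_rate = granted / resolved                    # 0.784 -> the "78%" card
delta_vs_tc = 16.4                                 # percentage points above TC average (stated)
implied_tc_avg = allow_rate * 100 - delta_vs_tc    # ~62.0%, derived under the assumption above

print(f"Career allow rate: {allow_rate:.1%}")             # 78.4%
print(f"Implied TC average allow rate: {implied_tc_avg:.1f}%")
```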

Statute-Specific Performance

§101: 6.2% (-33.8% vs TC avg)
§103: 61.3% (+21.3% vs TC avg)
§102: 3.8% (-36.2% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 398 resolved cases

Office Action

Grounds: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites the limitation "the first face" in limitation 3. There is insufficient antecedent basis for this limitation in the claim. Claims 2-9 are rejected for depending from claim 1.

Claims 2-5 recite the limitation "the first face". There is insufficient antecedent basis for this limitation in the claims.

Claim 6 recites the limitation "the face". There is insufficient antecedent basis for this limitation in the claim. Claims 7-9 are rejected for depending from claim 6.

Claim 11 recites the limitation "the face". There is insufficient antecedent basis for this limitation in the claim. Claims 12-17 are rejected for depending from claim 11.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Kale et al. (US 20210400315 A1) in view of Hashimoto et al. (US 20240386718 A1).
Regarding claim 1, Kale discloses a system (Kale [0022]), comprising:

a plurality of cameras, each of the cameras configured to capture images, compress the images via an artificial neural network, and provide first compressed images having embeddings representative of features determined by the artificial neural network (Kale [0232], “I/O devices may include … video cameras (exemplary plurality of cameras)”; [0069], “a stream of input video data to the Artificial Neural Network (ANN) can be analyzed by the Artificial Neural Network (ANN) into identify segments (embeddings representative of features determined by an artificial neural network) associated with different scenes depicted in the video stream. Each video segment can be configured to be compressed as a unit (an ANN to provide first compressed images).”; [0070], “the task of compressing a video stream using an Artificial Neural Network (ANN) can be added to a surveillance camera”);

a server computer configured to receive, from the plurality of cameras, second compressed images to generate analytics of embeddings of features in the second compressed images (Kale [0139], “a computer system may retrieve the compressed video file (a computer system/server computer configured to receive, from the plurality of cameras, (second) compressed images)”; [0162], “the decoder decompresses the input video (104) on the fly when the input video (104) is stored into the random access memory as the input (211) to the Artificial Neural Network (201)”; [0163], “The Deep Learning Accelerator (103) executes the instructions (205) to generate the video analytics (102) of the input video (104) (analytics of embeddings of features in an exemplary second compressed images is performed on previously stored compressed video frames/images).”).

Kale does not disclose at least one pair of augmented reality glasses having a computing unit configured to communicate with the server computer to determine a match of a second face in a view through the glasses with the first face.

However, Hashimoto discloses at least one pair of augmented reality glasses having a computing unit configured to communicate with the server computer to determine a match of a second face in a view through the glasses with the first face (Hashimoto [0059], “HMD 1 is configured to be connectable with a network server”; Hashimoto [0085], “extract face information from a video of an interviewee shot by the imager 71 (a second face in a view through the glasses/fig. 2)”; [0123], “HMD 1 (pair of augmented reality glasses) controls the communication processor 6 thereby to send a video of the new interviewee shot by the video processor 7 (the imager 71) via the network 33 to the network server 32 (pair of augmented reality glasses having a computing unit configured to communicate with the server computer) which performs the face information detection processing.”; [0176], “main controller 2 compares the face information (face feature) detected in the face information detection processing (step S420) with the face information (face features) saved by the face information saving function 25, and if both are remarkably similar (if a degree of coincidence in the outer shape (contour) of the face is within a preset threshold), determines that the interviewee is known (determine a match of a second face in a view through the glasses with the first face)”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Kale with Hashimoto to incorporate processing for HMDs for the purposes of comparing and identifying viewed objects. This would have enhanced Kale by providing users with additional features for artificial reality use cases.

Regarding claim 4, Kale in view of Hashimoto discloses the system of claim 1, wherein the server computer is configured to determine first metrics representative of features of the first face in an image and transmit the first metrics to the computing unit (Hashimoto [0123], “the network server 32 which performs the face information detection processing … the main controller 2 in the HMD 1 receives (only) a result of the face information detection performed by the network server 32 from the network server 32”); and the computing unit is configured to recognize the second face as being corresponding to the first face based on the first metrics (Hashimoto [0176], “the main controller 2 compares the face information (face feature) detected in the face information detection processing (step S420) with the face information (face features) saved by the face information saving function 25”).

Regarding claim 5, Kale in view of Hashimoto discloses the system of claim 1, wherein the computing unit is configured to transmit second metrics of the second face to the server computer (Hashimoto [0123], “the main controller 2 in the HMD 1 controls the communication processor 6 thereby to send a video of the new interviewee shot by the video processor 7 (the imager 71) via the network 33 to the network server”); and the server computer is configured to determine that the second face corresponds to the first face based on matching the first metrics and the second metrics (Hashimoto [0123], “the network server 32 which performs the face information detection processing”).

Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Kale in view of Hashimoto and further in view of Bunn et al. (US 20060190419 A1).

Regarding claim 2, Kale in view of Hashimoto discloses the system of claim 1, but does not disclose wherein the server computer is configured to detect an anomaly via a facial expression analysis based on the first face having an expression that is an outlier in expressions on faces on a crowd of people monitored by the cameras. However, Bunn discloses the server computer is configured to detect an anomaly via a facial expression analysis based on the first face having an expression that is an outlier in expressions on faces on a crowd of people monitored by the cameras (Bunn [0016], “high-resolution, high-speed video camera systems with different algorithms to measure fine resolution characteristics of observed subjects (faces on subjects/a crowd of people, monitored by the cameras) such as, but not limited to, measuring pupil dilation of the eyes, sweating, blushing, and other bio-behavioral aspects at the onset (face having an expression that is an outlier in expressions), and notes changes in these aspects thereafter and calibrates them to levels of impairment, intoxication and behavioral changes”; [0036], “the observations can include but are not limited to observing from a few to large crowds of subjects”; [0040], “stress analysis of facial appearance”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Kale further with Bunn to implement facial analysis in crowded situations to detect unusual behavior. This would have further improved Kale by including additional critical features in the system.

Regarding claim 3, Kale in view of Hashimoto discloses the system of claim 1, but does not disclose wherein the server computer is configured to detect an anomaly via a behavior change analysis to recognize a pattern associated with an indication of intoxication, sickness, or injury of a person having the first face. However, Bunn discloses the server computer is configured to detect an anomaly via a behavior change analysis to recognize a pattern associated with an indication of intoxication, sickness, or injury of a person having the first face (Bunn [0016], “high-resolution, high-speed video camera systems with different algorithms to measure fine resolution characteristics of observed subjects (faces on subjects/a crowd of people, monitored by the cameras) such as, but not limited to, measuring pupil dilation of the eyes, sweating, blushing, and other bio-behavioral aspects at the onset (face having an expression that is an outlier in expressions), and notes changes in these aspects thereafter and calibrates them to levels of impairment (injury, sickness), intoxication and behavioral changes”; [0036], “the observations can include but are not limited to observing from a few to large crowds of subjects”; [0040], “stress analysis of facial appearance”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Kale further with Bunn to implement facial analysis in crowded situations to detect unusual behavior. This would have further improved Kale by including additional critical features in the system.

Claims 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over Kale in view of Hashimoto and further in view of Sahin (US 20200337631 A1).

Regarding claim 6, Kale in view of Hashimoto discloses the system of claim 1, but does not disclose wherein the augmented reality display includes a highlight of the face in the view through the glasses. However, Sahin discloses the augmented reality display includes a highlight of the face in the view through the glasses (Sahin [0200], “The video feed of a heads-up display, in another example, may be augmented to highlight a face for the individual to look at”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Kale further with Sahin to highlight faces in an HMD view. This would have been done to clearly display faces that are relevant to users.
Regarding claim 7, Kale in view of Hashimoto and further in view of Sahin discloses the system of claim 6, wherein the augmented reality display further includes a symbol representative of a classification of the anomaly and the symbol is presented next to the second face in the view through the glasses (Sahin [0236], “a feedback algorithm may augment the video feed of a heads-up display of a data collection device to overlay a description of the emotional state of the individual, such as the word “irritated” floating above the individual's head or a simplified cartoon icon (symbol) representing an emotional state such as bored, happy, tired, or angry may supplant the individual's face in the heads-up display or hover hear the individual's face within the heads-up display”).

Regarding claim 8, Kale in view of Hashimoto and further in view of Sahin discloses the system of claim 7, wherein the server computer is configured to determine an identity of a person having the first face, determine a record of the person, and transmit the record to the computing unit (Hashimoto [0132], “when the collateral information on the new interviewee is saved in the network server 32, the HMD 1 can acquire the collateral information on the new interviewee from the network server 32 (the server computer configured to determine an identity of a person having the first face, determine a record of the person, and transmit the record to the computing unit)”; [0137], “The interviewee table (T840) (an identity of a person having the first face and a record of the person) as depicted in FIG. 7 can be saved in the network server 32”); and the computing unit is configured to present the record via audio in connection with the augmented reality display (Hashimoto [0189], “collateral information to be … presented … may be voice information to be output from the right speaker 821 or the left speaker 822.”).

Regarding claim 9, Kale in view of Hashimoto and further in view of Sahin discloses the system of claim 8, wherein the server computer is further configured to provide a representative image of the anomaly to the computing unit for presentation via the glasses in response to a request from a user of the augmented reality glasses (Sahin [0237], “The user may select one of the emoticons 1032 (e.g., through an input device of a wearable data collection device such as a tap, head movement, verbal command, or thought pattern) (a request from a user of the augmented reality glasses). The game may then present feedback to the user (provide a representative image of the anomaly to the computing unit for presentation via the glasses)”).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kale in view of Bunn and further in view of Hashimoto.
Regarding claim 10, Kale discloses a method (Kale [0021]), comprising:

receiving, in a server computer and from a plurality of cameras each configured to capture images, compressed images captured by the cameras (Kale [0139], “a computer system may retrieve the compressed video file (a computer system/server computer configured to receive, from the plurality of cameras, (second) compressed images)”; [0162], “the decoder decompresses the input video (104) on the fly when the input video (104) is stored into the random access memory as the input (211) to the Artificial Neural Network (201)”);

generating, by the server computer, analytics of embeddings of features provided in the compressed images (Kale [0163], “The Deep Learning Accelerator (103) executes the instructions (205) to generate the video analytics (102) of the input video (104) (analytics of embeddings of features in an exemplary second compressed images is performed on previously stored compressed video frames/images)”).

Kale does not disclose identifying, by the server computer from the analytics, an anomaly associated with a first face; determining, by the server computer, first metrics representative of features of the first face in an image; and communicating, by the server computer, with a pair of augmented reality glasses having a computing unit configured to detect a second face in a view through the glasses to determine a match of the second face with the first face based on the first metrics.

However, Bunn discloses identifying, by the server computer from the analytics, an anomaly associated with a first face (Bunn [0016], “high-resolution, high-speed video camera systems with different algorithms to measure fine resolution characteristics of observed subjects such as, but not limited to, measuring pupil dilation of the eyes, sweating, blushing, and other bio-behavioral aspects at the onset (an anomaly associated with a first face), and notes changes in these aspects thereafter and calibrates them to levels of impairment, intoxication and behavioral changes”; [0036], “the observations can include but are not limited to observing from a few to large crowds of subjects”; [0040], “stress analysis of facial appearance (analytics)”); determining, by the server computer, first metrics representative of features of the first face in an image (Bunn [0016], “high-resolution, high-speed video camera systems with different algorithms to measure fine resolution characteristics of observed subjects such as, but not limited to, measuring pupil dilation of the eyes, sweating, blushing (determining first metrics representative of features of the first face in an image)”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Kale with Bunn to implement facial analysis. This would have further improved Kale by including additional critical features in the system.

Kale in view of Bunn does not disclose communicating, by the server computer, with a pair of augmented reality glasses having a computing unit configured to detect a second face in a view through the glasses to determine a match of the second face with the first face based on the first metrics.
However, Hashimoto discloses communicating, by the server computer, with a pair of augmented reality glasses having a computing unit configured to detect a second face in a view through the glasses to determine a match of the second face with the first face based on the first metrics (Hashimoto [0059], “HMD 1 is configured to be connectable with a network server”; Hashimoto [0085], “extract face information from a video of an interviewee shot by the imager 71 (a second face in a view through the glasses/fig. 2)”; [0123], “HMD 1 (pair of augmented reality glasses) controls the communication processor 6 thereby to send a video of the new interviewee shot by the video processor 7 (the imager 71) via the network 33 to the network server 32 (pair of augmented reality glasses having a computing unit configured to communicate with the server computer) which performs the face information detection processing.”; [0176], “main controller 2 compares the face information (face feature) detected in the face information detection processing (step S420) with the face information (face features) saved by the face information saving function 25, and if both are remarkably similar (if a degree of coincidence in the outer shape (contour) of the face is within a preset threshold), determines that the interviewee is known (determine a match of a second face in a view through the glasses with the first face)”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Kale further with Hashimoto to incorporate processing for HMDs for the purposes of comparing and identifying viewed objects. This would have enhanced Kale by providing users with additional features for artificial reality use cases.

Claims 11-15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kale in view of Bunn, further in view of Hashimoto, and further in view of Sahin.

Regarding claim 11, Kale in view of Bunn and further in view of Hashimoto discloses the method of claim 10, but does not disclose wherein the augmented reality display includes: a highlight of the face in the view through the glasses; and a symbol, representative of a classification of the anomaly, presented next to the second face in the view through the glasses. However, Sahin discloses wherein the augmented reality display includes: a highlight of the face in the view through the glasses (Sahin [0200], “The video feed of a heads-up display, in another example, may be augmented to highlight a face for the individual to look at”); and a symbol, representative of a classification of the anomaly, presented next to the second face in the view through the glasses (Sahin [0236], “a feedback algorithm may augment the video feed of a heads-up display of a data collection device to overlay a description of the emotional state of the individual, such as the word “irritated” floating above the individual's head or a simplified cartoon icon (symbol) representing an emotional state such as bored, happy, tired, or angry may supplant the individual's face in the heads-up display or hover hear the individual's face within the heads-up display”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Kale further with Sahin to highlight faces in an HMD view. This would have been done to clearly display faces that are relevant to users.
Regarding claim 12, Kale in view of Bunn, further in view of Hashimoto, and further in view of Sahin discloses the method of claim 11, further comprising: determining, by the server computer, an identity of a person having the first face (Hashimoto [0132], “when the collateral information on the new interviewee is saved in the network server 32, the HMD 1 can acquire the collateral information on the new interviewee from the network server 32 (the server computer configured to determine an identity of a person having the first face, determine a record of the person, and transmit the record to the computing unit)”); retrieving, by the server computer, a record of the person (Hashimoto [0137], “The interviewee table (T840) (an identity of a person having the first face and a record of the person) as depicted in FIG. 7 can be saved in the network server 32”); and transmit, by the server computer, the record to the computing unit to cause the computing unit to present the record via audio in connection with the augmented reality display (Hashimoto [0189], “collateral information to be … presented … may be voice information to be output from the right speaker 821 or the left speaker 822.”).

Regarding claim 13, Kale in view of Bunn, further in view of Hashimoto, and further in view of Sahin discloses the method of claim 12, further comprising: transmitting, by the server computer, a representative image of the anomaly to the computing unit to cause the computing unit to present the representative image via the glasses in response to a request from a user of the augmented reality glasses (Sahin [0237], “The user may select one of the emoticons 1032 (e.g., through an input device of a wearable data collection device such as a tap, head movement, verbal command, or thought pattern) (a request from a user of the augmented reality glasses). The game may then present feedback to the user (provide a representative image of the anomaly to the computing unit for presentation via the glasses)”).
Regarding claim 14, Kale in view of Bunn, further in view of Hashimoto, and further in view of Sahin discloses the method of claim 13, further comprising: performing, by the server computer, a facial expression analysis of the compressed images (Kale [0139], “a computer system may retrieve the compressed video file (a computer system/server computer configured to receive, from the plurality of cameras, (second) compressed images)”; [0162], “the decoder decompresses the input video (104) on the fly when the input video (104) is stored into the random access memory as the input (211) to the Artificial Neural Network (201)”; [0163], “The Deep Learning Accelerator (103) executes the instructions (205) to generate the video analytics (102) of the input video (104) (analytics of embeddings of features in an exemplary second compressed images is performed on previously stored compressed video frames/images).”); and detecting, by the server computer, the anomaly in response to a determination that the first face has an expression that is an outlier in expressions on faces on a crowd of people monitored by the cameras (Bunn [0016], “high-resolution, high-speed video camera systems with different algorithms to measure fine resolution characteristics of observed subjects (faces on subjects/a crowd of people, monitored by the cameras) such as, but not limited to, measuring pupil dilation of the eyes, sweating, blushing, and other bio-behavioral aspects at the onset (face having an expression that is an outlier in expressions), and notes changes in these aspects thereafter and calibrates them to levels of impairment, intoxication and behavioral changes”; [0036], “the observations can include but are not limited to observing from a few to large crowds of subjects”; [0040], “stress analysis of facial appearance”).

Regarding claim 15, Kale in view of Bunn, further in view of Hashimoto, and further in view of Sahin discloses the method of claim 13, further comprising: performing, by the server computer, a behavior change analysis to recognize a pattern associated with an indication of intoxication, sickness, or injury of a person having the first face (Bunn [0016], “high-resolution, high-speed video camera systems with different algorithms to measure fine resolution characteristics of observed subjects (faces on subjects/a crowd of people, monitored by the cameras) such as, but not limited to, measuring pupil dilation of the eyes, sweating, blushing, and other bio-behavioral aspects at the onset (face having an expression that is an outlier in expressions), and notes changes in these aspects thereafter and calibrates them to levels of impairment (injury, sickness), intoxication and behavioral changes”; [0036], “the observations can include but are not limited to observing from a few to large crowds of subjects”; [0040], “stress analysis of facial appearance”).
Regarding claim 17, Kale in view of Bunn, further in view of Hashimoto, and further in view of Sahin discloses the method of claim 13, further comprising: receiving, by the server computer from the computing unit, second metrics of the second face (Hashimoto [0227], “The interviewee table (T870) (exemplary second metrics of the second face sent to the server) can be sent from the HMD 1 to the network server 32”); making, by the server computer in response to receiving the second metrics, a determination that the second face corresponds to the first face (Hashimoto [0227], “the network server 32 dedicated to the processings of specifying an interviewee from face feature amounts … is used for face feature amounts …, thereby achieving the processings of specifying an interviewee in the HMD 1”); and providing, by the server computer in response to the determination, information about the anomaly to cause the computing unit to generate the augmented reality display (Sahin [0236], “information regarding the emotional state of at least one individual is presented to a user (1014). For example, a feedback algorithm may augment the video feed of a heads-up display of a data collection device to overlay a description of the emotional state of the individua”).

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Hashimoto.

Regarding claim 18, Hashimoto discloses an apparatus (Hashimoto [0052], “information acquiring apparatus”), comprising: a pair of glasses (Hashimoto [0052], “the HMD 1 is activated and then rapidly acquires surrounding information of the HMD 1”); and a computing unit configured to receive, from a server computer, detect a second object image in a view through the glasses, and recognize the second object image as being corresponding to the first object image based on the first metrics (Hashimoto [0123], “the network server 32 which performs the face information detection processing … the main controller 2 in the HMD 1 receives (only) a result of the face information detection performed by the network server 32 from the network server 32”; Hashimoto [0176], “the main controller 2 compares the face information (face feature) detected in the face information detection processing (step S420) with the face information (face features) saved by the face information saving function 25”). Hashimoto does not expressly disclose first metrics of a first object image but suggests the first metrics of a first object image (Hashimoto [0114], “the face information processor 73 performs a processing of detecting an element such as eyes, nose, mouth or the like inside the face contour by a face element detection program.”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to utilize face metrics as suggested by Hashimoto. This would have been done to compare faces in an accurate and error-free manner.

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Hashimoto in view of Sahin.

Regarding claim 19, Hashimoto discloses the apparatus of claim 18, wherein the computing unit is further configured to generate an augmented reality display in the view to identify the first object image, and wherein the first object image and the second object image are representative of a face of a person (Hashimoto [0020], “when determining that the person can be an interviewee, previously acquires collateral information (exemplary first object image) on the person.
Consequently, when the user recognizes the person as an interviewee, the user knows the collateral information on the interviewee (exemplary second object image).”; [0056], “The collateral information is presented by … an image (object image)”; [0246], “In the example of FIG. 17, the HMD 1 displays a name (Jiro Yamada) 17 as the collateral information on the person 16 on the display screen 75.”), but does not disclose that the augmented reality display includes: a highlight of the face in the view through the glasses; and a symbol, representative of a classification of the anomaly, presented next to the second face in the view through the glasses.

However, Sahin discloses a highlight of the face in the view through the glasses (Sahin [0200], “The video feed of a heads-up display, in another example, may be augmented to highlight a face for the individual to look at”); and a symbol, representative of a classification of the anomaly, presented next to the second face in the view through the glasses (Sahin [0236], “a feedback algorithm may augment the video feed of a heads-up display of a data collection device to overlay a description of the emotional state of the individual, such as the word “irritated” floating above the individual's head or a simplified cartoon icon (symbol) representing an emotional state such as bored, happy, tired, or angry may supplant the individual's face in the heads-up display or hover hear the individual's face within the heads-up display”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Hashimoto with Sahin to highlight faces and display corresponding symbols in an HMD view. This would have been done to clearly display faces that are relevant to users along with relevant information about the displayed faces.

Allowable Subject Matter

Claims 16 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b), set forth in this Office action, and also rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter:

Regarding claim 16, none of the prior art of record, alone or in combination, discloses “dismissing, by the server computer, an anomaly classification of the representative image associated with the first face in response to an input from a user of the augmented reality glasses.”

Regarding claim 20, none of the prior art of record, alone or in combination, discloses “a first portion of the artificial neural network is implemented via a passive neural network; and a second portion of the artificial neural network is implemented via a processor and an accelerator of multiplication and accumulation operations.”

Conclusion

See the notice of references cited (PTO-892) for prior art made of record, including art that is not relied upon but considered pertinent to applicant's disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JITESH PATEL whose telephone number is (571) 270-3313. The examiner can normally be reached 8am - 5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said A. Broome, can be reached at (571) 272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JITESH PATEL/
Primary Examiner, Art Unit 2612
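For orientation, the claim 1 combination the examiner assembles above (Kale's camera-side ANN compression producing feature embeddings, a server running analytics on them, and Hashimoto's threshold-based face comparison on the glasses) amounts to a three-stage pipeline. The sketch below is purely illustrative: the embedding size, the cosine-similarity test, the 0.8 threshold, and every function name are assumptions standing in for Hashimoto's "degree of coincidence ... within a preset threshold", not anything disclosed in the cited references or the claims.

```python
# Illustrative sketch of the claimed pipeline (assumed details, not from the references):
# camera -> ANN embedding ("compressed image"), server -> analytics/watchlist,
# glasses -> match a face seen in the view against the watchlist embedding.
from __future__ import annotations

import numpy as np

EMBED_DIM = 128        # assumed embedding size
MATCH_THRESHOLD = 0.8  # assumed stand-in for Hashimoto's "preset threshold"

def camera_embed(image: np.ndarray) -> np.ndarray:
    """Stand-in for the camera-side ANN: reduce an image to a unit-norm feature embedding."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    emb = rng.normal(size=EMBED_DIM)  # placeholder for a real network's output
    return emb / np.linalg.norm(emb)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def glasses_match(view_embedding: np.ndarray, watchlist: dict[str, np.ndarray]) -> str | None:
    """Stand-in for the HMD-side comparison: return the first watchlist entry that matches."""
    for person_id, first_face in watchlist.items():
        if cosine_similarity(view_embedding, first_face) >= MATCH_THRESHOLD:
            return person_id
    return None

# Toy usage: the "first face" flagged by the server vs. a "second face" seen through the glasses.
frame = np.zeros((8, 8), dtype=np.uint8)
first_face = camera_embed(frame)   # server-side embedding of the flagged face
second_face = first_face           # same face seen again through the glasses
print(glasses_match(second_face, {"person-001": first_face}))  # -> "person-001"
```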

Prosecution Timeline

Jan 17, 2024: Application Filed
Nov 05, 2025: Response after Non-Final Action
Mar 07, 2026: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602866
DIGITAL TWIN AUTHORING AND EDITING ENVIRONMENT FOR CREATION OF AR/VR AND VIDEO INSTRUCTIONS FROM A SINGLE DEMONSTRATION
2y 5m to grant • Granted Apr 14, 2026
Patent 12597245
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant • Granted Apr 07, 2026
Patent 12586313
DAMAGE DETECTION FROM MULTI-VIEW VISUAL DATA
2y 5m to grant • Granted Mar 24, 2026
Patent 12579739
2D CONTROL OVER 3D VIRTUAL ENVIRONMENTS
2y 5m to grant • Granted Mar 17, 2026
Patent 12579765
DEFINING AND MODIFYING CONTEXT AWARE POLICIES WITH AN EDITING TOOL IN EXTENDED REALITY SYSTEMS
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 91% (+12.4%)
Median Time to Grant: 2y 2m
PTA Risk: Low
Based on 398 resolved cases by this examiner. Grant probability derived from career allow rate.
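The note above says the grant probability is derived from the career allow rate, and the with-interview figure adds the interview lift. A back-of-the-envelope sketch under that reading (an additive percentage-point lift is my assumption, not the tool's documented method):

```python
# Projection arithmetic implied by the note: base probability = career allow rate,
# with-interview probability = base + interview lift (in percentage points, assumed additive).
base_probability = 312 / 398 * 100   # 78.4% career allow rate -> the "78%" card
interview_lift = 12.4                # percentage points (stated)

with_interview = base_probability + interview_lift  # ~90.8%, rounds to the "91%" card
print(f"Base: {base_probability:.0f}%  With interview: {with_interview:.0f}%")
```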
