Prosecution Insights
Last updated: April 19, 2026
Application No. 18/370,348

PERCEPTION MODES

Final Rejection: §102, §103
Filed: Sep 19, 2023
Examiner: NAZRUL, SHAHBAZ
Art Unit: 2638
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 2 (Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 1m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 90% — above average (569 granted / 634 resolved; +27.7% vs TC avg)
Interview Lift: moderate, +5.5% among resolved cases with interview
Avg Prosecution: 2y 1m — fast prosecutor; 20 applications currently pending
Total Applications: 654 across all art units (career history)

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 39.8% (-0.2% vs TC avg)
§102: 34.0% (-6.0% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 634 resolved cases

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claims 1-19 are pending. Claims 1, 4, 5, 7-19 are amended.

Response to Arguments

Applicant's arguments with respect to claims 1-19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Objections

Claim 18 is objected to because of the following informalities: claim 18 recites "perception" within the claim scope, and there is insufficient antecedent basis for the limitation. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 9-10, 16, 18-19 are rejected under 35 U.S.C.
102(a)(1) and/or 102(a)(2) as being anticipated by Bluming et al. (US 20150127300 A1, hereinafter Bluming).

Regarding claim 1, Bluming discloses a method (figs. 7-9, ¶0090, ¶0096, ¶0102-0103), comprising: identifying a current level of privacy for a computer system (The sensor hub 120 processes the sensor data 404 based on context 406 pertaining to the sensor input 402 as applied to one or more of the policies 132. Generally, the context 406 describes state conditions (e.g., environment and/or system state) that surround input of the sensor input 402. For instance, the context 406 may indicate a current state condition of the computing device 102, e.g., that the processing system 116 is in an inactive state. The context 406 may indicate various other state information, such as time of day, location, calendar events, and so forth. – ¶0071. The sensor hub 120, for example, evaluates rules and/or parameters of the sensor policy based on the sensor data and context to determine whether the sensor policy indicates that, based on the sensor data and context, a wake event is to be generated. – ¶0100. Policies include privacy policy 304, fig. 3. Also see Abstract, figs. 3-9); and in response to identifying the current level of privacy: in accordance with a determination that the current level of privacy is a first level of privacy, configuring a first sensor to operate (Embodiments discussed herein enable sensor selection based on context and policy to provide for a variety of different sensor types and configurations, and for detecting a variety of different phenomena. In at least some embodiments, a sensor hub is employed to receive requests for sensor data from various functionalities and to select sensors to provide the sensor data based on application of context to policies that specify parameters for sensor selection. – Abstract); and in accordance with a determination that the current level of privacy is a second level of privacy, configuring a second sensor to operate, wherein the second level of privacy is different from the first level of privacy, and wherein the second sensor is different from the first sensor (Abstract. The recipe module 128 is representative of functionality to store, manage, and configure sensor recipes 134. Generally, the sensor recipes 134 specify different instances and combinations of sensors for sensing different phenomena. In at least some implementations, individual sensor recipes 134 can include sensors from different sensor systems 110. Thus, the sensor recipes 134 enable fusion of sensors from multiple sensor systems 110 to create custom combinations of sensors that may sense and/or detect various types of phenomena. Based on the policies 132, for instance, different of the sensor recipes 134 can be selected for sensing different phenomena. According to one or more implementations, the sensor hub 120 can cause a graphical user interface (GUI) to be presented that enables user configuration of the sensor recipes 134. Further details concerning recipes are discussed below. – ¶0042. Also see figs. 2-9).

Regarding claim 2, Bluming discloses the method of claim 1, wherein the first sensor is a touch sensor (A presence recipe 202b specifies that touch sensor input plus call status may be utilized to determine a presence status of a user. For instance, if a touch input device detects that a user is holding their mobile device but cell phone functionality does not detect that a call is in progress, a user may be determined to be available. – ¶0047).
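As claimed, the method is essentially a dispatch from the identified privacy level to a sensor configuration, with claims 2 and 3 tying the first and second sensors to a touch sensor and an infrared camera. A minimal illustrative sketch of that structure — the level and sensor names are invented for illustration, not taken from Applicant's or Bluming's implementation:

```python
# Hypothetical sketch of the claim 1 structure: each identified level
# of privacy causes a different sensor to be configured to operate.
# Level and sensor names are illustrative, not from the application.
PRIVACY_TO_SENSOR = {
    "first_level": "touch_sensor",      # cf. claim 2: first sensor is a touch sensor
    "second_level": "infrared_camera",  # cf. claim 3: second sensor is an infrared camera
}

def configure_sensor(current_privacy_level: str) -> str:
    """Return the sensor to configure for the identified privacy level."""
    try:
        return PRIVACY_TO_SENSOR[current_privacy_level]
    except KeyError:
        raise ValueError(f"no sensor mapped for {current_privacy_level!r}")

print(configure_sensor("second_level"))  # infrared_camera
```

The anticipation dispute, in effect, is whether Bluming's context-and-policy-driven recipe selection reads on this level-to-sensor dispatch.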
Regarding claim 3, Bluming discloses the method of claim 1, wherein the second sensor is an infrared camera (Examples of input devices include a keyboard, a cursor control device (e.g., a mouse), a microphone (e.g., for voice recognition and/or spoken input), a scanner, touch functionality (e.g., capacitive or other sensors that are configured to detect physical touch), a camera (e.g., which may employ visible or non-visible wavelengths such as infrared frequencies to detect movement that does not involve touch as gestures), and so forth. – ¶0118).

Regarding claim 4, Bluming discloses the method of claim 1, further comprising: in response to determining the current level of privacy and in accordance with a determination that the current level of privacy is a third level of privacy, configuring a third sensor to operate, wherein the third level of privacy is different from the first level of privacy and the second level of privacy, and wherein the third sensor is different from the first sensor and the second sensor (the Abstract indicates that many different levels of privacy and/or context are available to select and configure different sets of sensors. In at least some implementations, sensor fusion generally includes combining multiple different sensors (e.g., sensor systems) according to sensor "recipes" that specify different combinations of sensors for sensing different phenomena. For instance, multiple sensor recipes can be defined that each specify a different combination of sensors for sensing device position. Another set of sensor recipes can be defined that each specify a different combination of sensors for sensing user identity, such as for authenticating a user for access to various resources. As further detailed herein, sensor recipes can be defined for sensing a variety of different phenomena. – ¶0018).

Regarding claim 9, Bluming discloses the method of claim 4, further comprising: while the current level of privacy is the second level of privacy, receiving first user input; in response to receiving the first user input, changing the current level of privacy from the second level of privacy to the third level of privacy; while the current level of privacy is the third level of privacy, receiving second user input; and in response to receiving the second user input, changing the current level of privacy from the third level of privacy to the second level of privacy (Abstract. According to various implementations, sensor recipes can be selected based on context and policies for a particular device and/or user. "Context" generally refers to system accessible state that informs a system how to interpret sensor input. Examples of context include user identification, user role (e.g. work roles, personal roles, and so forth), detected behavioral patterns, time, user preferences, environmental information, location, weather, historical values for various context considerations, and so forth. In at least some implementations, system learning may also be employed to generate context information. – ¶0019. According to the scenario 400, the processing system 116 is initially in an inactive and/or low power state, such as an off state or a hibernated state. The sensor system 110 and the sensor hub 120, however, are in an active state. – ¶0069. Also see ¶0040, ¶0043, ¶0071-0072).

Regarding claim 10, Bluming discloses the method of claim 1, further comprising: after configuring the second sensor to operate, attempting to detect a subject using the second sensor by tracking a position of the subject over time and performing an operation based on a current position of the subject (Location recipes 200 specify different instances and combinations of sensors that can be employed to detect location, e.g., location of the computing device 102 and/or a user of the computing device. For instance, a location recipe 200a specifies that GPS coordinates may be utilized to detect a location of a user, such as based on GPS coordinates detected by the user's device. A location recipe 200b specifies that cell phone triangulation and wireless network information may be utilized to detect a location of a user. A location recipe 200c specifies that wireless network information plus altitude detection information may be utilized to detect a location of a user. – ¶0045. A presence recipe 202c specifies that image detection plus time of day may be utilized to determine user presence. For instance, a camera and/or cameras may be activated to detect whether the user is detectable, e.g., is within a viewing area of the cameras. If the user is detected by the camera and the time of day is during normal business hours, the user may be determined to be "available." Otherwise, if the user is not detected by the camera and/or if the user is detected but the time of day is after business hours, the user may be determined to be "away." – ¶0047. Also see ¶0079-0080).

Regarding claim 16, Bluming discloses the method of claim 1, further comprising: while the current level of privacy is the second level of privacy, receiving first touch input; and in response to receiving the first touch input, changing the current level of privacy from the first level of privacy to the second level of privacy (Abstract, ¶0055-0057).

Regarding claim 18, Bluming discloses a non-transitory computer-readable storage medium (1006, fig. 10, ¶0115) storing one or more programs configured to be executed by one or more processors (1004, fig. 10, ¶0115) of a computer system (1002, fig. 10, ¶0115), the one or more programs including instructions for (¶0114-0118): identifying a current level of privacy for the computer system; and in response to identifying the current level of privacy: in accordance with a determination that the current level of privacy is a first level of perception, configuring a first sensor to operate; and in accordance with a determination that the current level of privacy is a second level of privacy, configuring a second sensor to operate, wherein the second level of privacy is different from the first level of privacy, and wherein the second sensor is different from the first sensor (see the substantively similar claim 1 rejection above).

Regarding claim 19, Bluming discloses a computer system (1002, fig. 10, ¶0115), comprising: one or more processors (1004, fig. 10, ¶0115); and memory (1006, fig. 10, ¶0115) storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for (¶0114-0118): identifying a current level of privacy for the computer system; and in response to identifying the current level of privacy: in accordance with a determination that the current level of privacy is a first level of privacy, configuring a first sensor to operate; and in accordance with a determination that the current level of privacy is a second level of privacy, configuring a second sensor to operate, wherein the second level of privacy is different from the first level of privacy, and wherein the second sensor is different from the first sensor (see the substantively similar claims 1 and 18 above).

Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bluming.
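The sensor-recipe mechanism the rejection attributes to Bluming (¶0042, ¶0045, ¶0047) can be pictured as a table of sensor combinations per phenomenon, filtered by policy. A rough sketch — the recipe IDs follow the cited figures, but the sensor names and selection rule are invented for illustration:

```python
# Rough sketch of Bluming-style sensor recipes: each recipe names a
# combination of sensors for sensing a phenomenon; policy determines
# which combinations are permitted. Sensor names are illustrative.
RECIPES = {
    "location": [
        {"id": "200a", "sensors": ["gps"]},
        {"id": "200b", "sensors": ["cell_triangulation", "wifi"]},
        {"id": "200c", "sensors": ["wifi", "altimeter"]},
    ],
    "presence": [
        {"id": "202b", "sensors": ["touch", "call_status"]},
        {"id": "202c", "sensors": ["camera", "clock"]},
    ],
}

def select_recipe(phenomenon, allowed):
    """Return the first recipe whose sensors are all policy-permitted."""
    for recipe in RECIPES[phenomenon]:
        if all(s in allowed for s in recipe["sensors"]):
            return recipe["id"]
    return None

# A privacy policy that forbids GPS still allows network-based location:
print(select_recipe("location", {"cell_triangulation", "wifi", "altimeter"}))
```

On this reading, different privacy policies selecting different recipes is what the examiner maps onto the claimed "different level of privacy configures a different sensor" limitation.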
Regarding claim 12, Bluming discloses the method of claim 11, further comprising: in response to determining the current level of privacy and in accordance with the determination that the current level of privacy is the first level of privacy, maintaining a display generation component in an inactive state (Generally, "wake events" refer to notification and/or power events that can cause different functionalities to transition from an inactive state to an active state. The wake policy 408, for instance, may specify combinations of contexts and sensor inputs that may cause a wake event to be generated, and/or combinations of contexts and sensor inputs that do not result in a wake event being generated. – ¶0072). Bluming is not found disclosing expressly that the display generation component is in an inactive state. However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to keep the display device (see ¶0118) in an inactive state since, in a context where a wake event has not yet taken place, there is nothing to display.

Regarding claim 15, Bluming discloses the method of claim 1, except, further comprising: in response to determining the current level of privacy and in accordance with the determination that the current level of privacy is the second level of privacy, displaying an indication of time. However, Bluming discloses under presence recipe 202a that an occurrence of a particular clock time might potentially function as a passive input for context state determination (fig. 2, ¶0070). The presence recipe can determine a user presence state based on time (¶0026). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to display an indication of time, e.g., a clock time, along with displaying the presence indication of a user (e.g., "away", "available" or "in a meeting") when a sensor senses such a presence status, because combining prior art elements ready to be improved according to known methods to yield predictable results is obvious. Furthermore, such a combination would enhance the versatility of the overall system.

Claim(s) 5, 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bluming in view of Lu (US 20230044279 A1).

Regarding claim 5, Bluming discloses the method of claim 4, except, further comprising: after configuring the third sensor to operate, detecting a subject using the third sensor, and after detecting the subject using the third sensor, shifting a field of detection of the second sensor to maintain the subject in the field of detection as the subject moves. However, Lu discloses that a first field of view of a first sensor is changed, and output of a second sensor is used for object tracking, so that the object remains within a field of view as the subject moves (autonomous vehicle; Abstract, ¶0003, fig. 3, ¶0066-0071). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Bluming with the teaching of Lu — where, when used in an autonomous vehicle, a first sensor in a first context can detect the surroundings of the vehicle to detect an object of interest, and another sensor thereafter maintains lock on and tracks the subject for a specific use context — to obtain: after configuring the third sensor to operate, detecting a subject using the third sensor, and after detecting the subject using the third sensor, shifting a field of detection of the second sensor to maintain the subject in the field of detection as the subject moves, because combining prior art elements ready to be improved according to known methods to yield predictable results is obvious.
Furthermore, such a combination would enhance the versatility of the overall system.

Regarding claim 11, Bluming discloses the method of claim 1, further comprising: in accordance with the determination that the current level of privacy is switching to the first level of privacy (Privacy policies 304 specify rules and parameters for protecting privacy concerns. For instance, the privacy policies 304 may be implemented to protect user privacy, enterprise privacy, data privacy, and so forth. – ¶0057. …the location policies 314 can consider other policies in specifying rules and parameters for determining user and/or device location, such as in the interest of protecting security, privacy, legal considerations, and so forth. – ¶0062). Bluming is not found disclosing expressly the limitation of ensuring that a direction of one or more sensors is maintained in a predefined direction. However, Lu discloses that a first field of view of a first sensor is changed, and output of a second sensor is used for object tracking, so that the object remains within a field of view in a specific direction as the subject moves (autonomous vehicle; Abstract, ¶0003, fig. 3, ¶0066-0071). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Bluming with the teaching of Lu — where, when used in an autonomous vehicle, a first sensor in a first context can detect the surroundings of the vehicle to detect an object of interest, and another sensor thereafter maintains lock on and tracks the subject for a specific use context — to obtain: ensuring that a direction of one or more sensors is maintained in a predefined direction, because combining prior art elements ready to be improved according to known methods to yield predictable results is obvious. Furthermore, such a combination would enhance the versatility of the overall system.

Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bluming in view of Janjic et al. (US 11399137 B2, hereinafter Janjic).

Regarding claim 13, Bluming discloses the method of claim 1, except, further comprising: after configuring the second sensor to operate, detecting a subject using the second sensor; and after detecting the subject using the second sensor, shifting a field of detection of the second sensor to maintain the subject in the field of detection as the subject moves relative to the second sensor. However, Janjic discloses tracking a moving object to keep the object in the field of view of the capturing camera (Col. 6, line 60 – Col. 7, line 8). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Bluming — such that, using location policy 314 and under location and/or activity recipes 200 and 206, an object is actively tracked so that it remains within the field of view of the sensor — to obtain: after configuring the second sensor to operate, detecting a subject using the second sensor; and after detecting the subject using the second sensor, shifting a field of detection of the second sensor to maintain the subject in the field of detection as the subject moves relative to the second sensor, because combining prior art elements ready to be improved according to known methods to yield predictable results is obvious. Furthermore, such a combination would enhance the versatility of the overall system.

Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bluming in view of Bagwell et al. (US 20210192199 A1, hereinafter Bagwell).
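The field-of-detection tracking recited in claims 5, 11, and 13 (and cited against Lu and Janjic) reduces to a pan-control loop that re-centers the sensor when the subject nears the edge of its field of view. A schematic sketch with invented names and angles, not drawn from any cited reference:

```python
# Schematic pan control: shift the sensor's field of detection so a
# moving subject stays inside it. Angles in degrees; names illustrative.
FOV_HALF_WIDTH = 30.0  # sensor sees +/-30 degrees around its pan angle

def repoint(pan_angle: float, subject_angle: float) -> float:
    """Re-center the sensor on the subject if it leaves the field."""
    if abs(subject_angle - pan_angle) > FOV_HALF_WIDTH:
        pan_angle = subject_angle  # shift field of detection onto subject
    return pan_angle

pan = 0.0
for subject in [10.0, 25.0, 45.0, 50.0]:  # subject drifts to one side
    pan = repoint(pan, subject)
print(pan)  # 45.0
```

The allowable claim 7 material differs in requiring two such loops with different follow behavior (a "closer follow" for the third sensor), which neither reference was found to teach.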
Regarding claim 14, Bluming discloses the method of claim 1, except, further comprising: in response to determining the current level of privacy and in accordance with the determination that the current level of privacy is the first level of privacy, shifting a field of detection of the second sensor in a direction away from a subject. However, Bagwell discloses that when sensing an obstacle in the environment, an autonomous vehicle takes measures to avoid the obstacle (¶0001). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Bluming — such that, using safety policy 310 under activity recipe 206b to detect an obstacle in an automated vehicle usage, the obstacle is avoided when a sensor detects it, as disclosed by Bagwell — to obtain: in response to determining the current level of privacy and in accordance with the determination that the current level of privacy is the first level of privacy, shifting a field of detection of the second sensor in a direction away from a subject, because combining prior art elements ready to be improved according to known methods to yield predictable results is obvious. Furthermore, such a combination would enhance the versatility of the overall system.

Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bluming in view of Lu and further in view of Carter (US 9554090 B1).

Regarding claim 6, Bluming in view of Lu discloses the method of claim 5, except, further comprising: after detecting the subject using the third sensor, displaying a video stream received from a remote device (Bluming: "… information that is streamed to and/or from the computing device 102, as well is information that is observed, e.g., environmental state information" – ¶0040). However, Carter discloses, after detection of a person, displaying a video stream received from a remote device (see steps b and c, claim 26). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Bluming in view of Lu, such that the tracked object in Lu is displayed by streaming on the display device of Bluming, to obtain: after detecting the subject using the third sensor, displaying a video stream received from a remote device, because combining prior art elements ready to be improved according to known methods to yield predictable results is obvious. Furthermore, such a combination would enhance the versatility of the overall system.

Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bluming in view of FORUTANPOUR (US 20230144091 A1, hereinafter FORUTANPOUR'091).

Regarding claim 17, Bluming discloses the method of claim 1, except, further comprising: while the current level of privacy is the second level of privacy, determining that no user input has been received for a threshold amount of time; and in response to determining that no user input has been received for the threshold amount of time, changing the current level of privacy from the second level of privacy to the first level of privacy. However, FORUTANPOUR'091 discloses changing the perception level on a display if user review input is not received within a threshold amount of time (¶0179). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Bluming with the teaching of FORUTANPOUR'091 of changing the perception level in case no user input is received within a threshold amount of time, to obtain: while the current level of privacy is the second level of privacy, determining that no user input has been received for a threshold amount of time; and in response to determining that no user input has been received for the threshold amount of time, changing the current level of privacy from the second level of privacy to the first level of privacy, because combining prior art elements ready to be improved according to known methods to yield predictable results is obvious.

Allowable Subject Matter

Claims 7-8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the prior art of record, taken alone or in combination, fails to reasonably disclose or suggest, regarding claim 7: after configuring the second sensor to operate, shifting, in a first manner, a field of detection of the second sensor to maintain a subject in the field of detection of the second sensor as the subject moves relative to the second sensor; and after configuring the second sensor to operate, detecting the subject using the third sensor, shifting, in a second manner different from the first manner, a field of detection of the third sensor to maintain the subject in the field of detection of the third sensor as the subject moves relative to the third sensor, wherein the second manner is a closer follow of the subject than the first manner. Claim 8 is allowable for being dependent on allowable claim 7.
Conclusion

The prior art made of record and not relied upon, considered pertinent to applicant's disclosure, includes KUNTAGOD et al. (US 20130282149 A1) and NACHMAN et al. (US 20120254878 A1), which disclose context-adaptive sensor selection algorithms of interest.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAHBAZ NAZRUL, whose telephone number is (571) 270-1467. The examiner can normally be reached M-Th: 9.30 am-3 pm, 6.30 pm-9 pm; F: 9.30 am-1.30 pm, 4 pm-8 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lin Ye, can be reached at 571-272-7372. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAHBAZ NAZRUL/
Primary Examiner, Art Unit 2638

Prosecution Timeline

Sep 19, 2023 — Application Filed
May 20, 2024 — Response after Non-Final Action
Sep 06, 2025 — Non-Final Rejection (§102, §103)
Nov 10, 2025 — Response Filed
Feb 20, 2026 — Final Rejection (§102, §103) — current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587761 — IMAGING APPARATUS, DRIVE METHOD OF IMAGING APPARATUS, AND PROGRAM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12578626 — CAMERA DEVICE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12581766 — SOLID-STATE IMAGING DEVICE AND ELECTRONIC EQUIPMENT (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579832 — LIDAR MANAGED IMAGE GENERATION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12563293 — AUTOMATIC FOCUS CONTROL DEVICE, OPERATION METHOD OF AUTOMATIC FOCUS CONTROL DEVICE, OPERATION PROGRAM OF AUTOMATIC FOCUS CONTROL DEVICE, AND IMAGING APPARATUS (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 90%
With Interview: 95% (+5.5%)
Median Time to Grant: 2y 1m
PTA Risk: Moderate
Based on 634 resolved cases by this examiner. Grant probability is derived from the career allow rate.
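The headline figures follow from the examiner's career record quoted above (569 granted of 634 resolved), assuming the with-interview number is simply the base allow rate plus the reported +5.5% lift:

```python
# Reproduce the dashboard's headline numbers from the quoted career data.
granted, resolved = 569, 634
allow_rate = granted / resolved          # ~0.897, displayed as 90%
with_interview = allow_rate + 0.055      # base rate plus the +5.5% lift
print(f"{allow_rate:.1%}")       # 89.7%
print(f"{with_interview:.1%}")   # 95.2%
```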
