Prosecution Insights
Last updated: April 19, 2026
Application No. 16/215,613

MOBILE DEVICE CONTEXT AWARE DETERMINATIONS

Non-Final Office Action (§103), OA Round 11

Filed: Dec 10, 2018
Examiner: MAPA, MICHAEL Y
Art Unit: 2645
Tech Center: 2600 — Communications
Assignee: Cellepathy Inc.
Grant Probability: 71% (Favorable); 99% with interview
Expected OA Rounds: 11-12
Expected Time to Grant: 2y 10m

Examiner Intelligence

Career Allow Rate: 71% (518 granted / 728 resolved; +9.2% vs TC avg), above average
Interview Lift: +27.4% allowance rate for resolved cases with an interview vs without (strong)
Typical Timeline: 2y 10m average prosecution; 39 applications currently pending
Career History: 767 total applications across all art units
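For context on where these roll-ups come from, here is a minimal sketch of the arithmetic, assuming a hypothetical per-case record layout (the field names granted and had_interview are illustrative, not from the underlying dataset):

```python
# Hypothetical resolved-case records for one examiner; the real dataset
# would hold 728 such entries (518 of them with granted=True).
resolved = [
    {"granted": True, "had_interview": True},
    {"granted": False, "had_interview": False},
    # ...
]

def allow_rate(cases):
    """Share of resolved cases that ended in a grant."""
    return sum(c["granted"] for c in cases) / len(cases)

career = allow_rate(resolved)                                       # 518/728, ~71%
with_iv = allow_rate([c for c in resolved if c["had_interview"]])
without_iv = allow_rate([c for c in resolved if not c["had_interview"]])
lift = with_iv - without_iv                                         # the +27.4% "interview lift"
```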

Statute-Specific Performance

§101: 4.9% (-35.1% vs TC avg)
§103: 63.1% (+23.1% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 728 resolved cases
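Note that every displayed delta implies the same 40% baseline (4.9 + 35.1 = 63.1 - 23.1 = 11.4 + 28.6 = 40.0), consistent with the legend's caveat that the Tech Center line is an estimate. A sketch of how such per-statute shares would be computed, with an illustrative rejection list and that uniform baseline:

```python
from collections import Counter

# Hypothetical statute tags, one per rejection issued by this examiner.
rejections = ["103", "103", "103", "101", "102", "112"]

TC_AVG = 0.40  # uniform Tech Center baseline implied by the deltas above

counts = Counter(rejections)
total = sum(counts.values())
for statute in ("101", "103", "102", "112"):
    share = counts[statute] / total
    print(f"§{statute}: {share:.1%} ({share - TC_AVG:+.1%} vs TC avg)")
```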

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/21/25 has been entered.

Response to Amendment

Claim 114 has been amended. Claims 102-113 and 115-121 have not been amended. Claims 1-101 have been cancelled.

Allowable Subject Matter

Claims 102-113 are allowed. Claim 116 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claims 102-113 are allowed, and dependent claim 116 is objected to as allowable, because the closest prior art found fails to disclose, teach, or suggest, alone or in combination, and fails to render obvious, the uniquely distinct features in the specific order, structure, and combination of limitations as a whole recited in independent claims 102 and 109, and recited in dependent claim 116 in combination with all of the limitations of the base claim and any intervening claims from which claim 116 depends. Dependent claims 103-108 and 110-113 are allowable based on their dependency on allowed independent claims 102 and 109 and for the same reasons indicated above.

Response to Arguments

Applicant's arguments filed 11/21/25 with regard to claims 114-115 and 117-121 have been fully considered but they are not persuasive.

APPLICANT'S ARGUMENTS: The applicant argues that although the cited portions of Hill teach that "user gesture identification consistent with driving may include identifying movements and positions of the user's eyes, head, shoulders and hands, and any combinations thereof," this still does not teach or suggest anything with respect to "based on the head angle of the user in relation to the device as determined based on the processing of the one or more visual captures in relation to the vehicle, computing a likelihood that a user of the device, as depicted within the one or more visual captures and as present within the vehicle is at least one of a driver or a passenger," as recited in claim 114. For example, the movements and positions of the user's eyes, head, shoulders, and hands in Hill do not teach or suggest computing a likelihood that the user of the device is at least one of a driver or a passenger based on the head angle of the user in relation to the device, as determined based on the processing of the one or more visual captures in relation to the vehicle, as encompassed by claim 114.

Moreover, MPEP 2143.01 directs that "if a proposed modification would render the prior art invention being modified unsatisfactory for its intended purpose, then there is no suggestion or motivation to make the proposed modification." But that is precisely what the Office action does here. Specifically, the Office action proposes substituting functionality from one embodiment in Hill with functionality from another embodiment in Hill. In doing so, the proposed modification would render the first embodiment unsatisfactory for its intended purpose, and as such there is no suggestion or motivation to make the proposed modification. For this reason as well, the rejection should be withdrawn. Because Hill does not teach, disclose, or suggest at least the above-referenced features of claim 114 (and, additionally, because the Office's proposed modification would render Hill's first embodiment unsatisfactory for its intended purpose), claim 114 is patentable over the art of record. (See pages 8-9 of Applicant's Arguments filed on 11/21/25.)

EXAMINER'S RESPONSE: The examiner respectfully disagrees. Contrary to the applicant's arguments, the teachings of Hill do disclose the argued limitations of "based on the head angle of the user in relation to the device as determined based on the processing of the one or more visual captures in relation to the vehicle, computing a likelihood that a user of the device, as depicted within the one or more visual captures and as present within the vehicle is at least one of a driver or a passenger." Furthermore, contrary to the applicant's arguments, the proposed modification does not render Hill's first embodiment unsatisfactory for its intended purpose, as explained below.

To begin with, the examiner notes that Applicant's Arguments filed on 11/21/25 appear to be similar to the arguments filed on 05/02/25, to which the examiner already provided a detailed explanation and response in the previous Office action of 05/21/25. As no additional arguments are presented against that explanation, the examiner can only reiterate the previous response herein. The examiner directs the applicant to the highlighted portions of Hill, Fig. 1, [0006]-[0008], and [0026], reproduced below:

[Hill, Fig. 1 (image omitted): criteria for detecting a driving user.]

[0007] One embodiment may identify the user gestures based on images taken by its cameras. Gestures of a driver are different from a passenger's. In an embodiment with a rearward facing camera and facial recognition software for distinguishing the user's face from all of the objects comprising the background scenery (the interior components of the vehicle and the objects outside the vehicle), or for identifying when the user's face tilt changes in a nodding motion, the user interaction analyzer of the embodiment may decide that the user is a driver. Another embodiment with an eye movement tracking sensor may detect that the user is a driver by identifying an up and down darting motion of the user's eye. Alternatively, if a user interaction analyzer automatically takes pictures of the user every 10 seconds, identifies the objects in the pictures, and finds in a picture that at least one of the user's hands rests on the steering wheel (a part of the background scenery), then the user interaction analyzer may determine that a driver gesture is detected.
Similarly, various combinations of eye, head, hand, and shoulder movements and positions may be used for identifying whether the user is driving.

[0006] In addition to the traveling speed, indications of whether a user is driving include the gestures of the user, the objects near the user, and the orientation of the mobile device being used. The mobile device may include a user interaction analyzer to detect these driver indications. For example, user gesture identification consistent with driving may include identifying movements and positions of the user's eyes, head, shoulders, and hands, and any combinations thereof. Objects near the user may be used to identify whether the user is sitting in the driver seat. As to the orientation of the mobile device, a driver is not likely to tilt the mobile device flat against her lap when using the device. Therefore, if a user uses the mobile device with the screen facing up, the user is less likely the driver.

[0026] While the invention has been described by means of specific embodiments, numerous modifications and variations could be made thereto by those ordinarily skilled in the art without departing from the scope and spirit disclosed herein.

[0008] Additionally, the relative positions of the objects may indicate where the user is sitting in a vehicle. Examples of the objects include an instrument cluster, a steering wheel, the road, other vehicles, and so forth. An embodiment may use a forward facing camera to capture images for identifying the objects. When the embodiment identifies from the image a certain shape that resembles a part of an instrument cluster, the user may be sitting in the driver seat and therefore a driver. Also, if a steering-wheel-like object, or the meeting of the dashboard with the windshield, appears in the image taken by the forward facing camera, the embodiment may determine that the user is holding the mobile device while driving. Another example is a road with road markings delineating the lanes; if the marking on the left is closer to the user, the user may be sitting in the driver seat (in most countries). Various embodiments may use various objects and combinations thereof to identify the user's seat.

As can be seen from the highlighted portions above, Hill [0007] discloses that one embodiment may identify user gestures based on images taken by its cameras (i.e., reads on the one or more visual captures and on "as depicted within the one or more visual captures"), wherein a rearward facing camera and facial recognition software distinguish the user's face from all of the objects comprising the background scenery, such as the interior components of the vehicle (i.e., reads on "received in relation to the vehicle" and on "as present in the vehicle") and the objects outside the vehicle, and identify (i.e., reads on processing to determine) when the user's face tilt (i.e., reads on head angle) changes in a nodding motion, upon which the user interaction analyzer may decide that the user is a driver (i.e., reads on computing a likelihood that the user of the device is at least one of a driver of the vehicle). In another embodiment, an eye movement tracking sensor may detect that the user is a driver by identifying an up and down darting motion of the user's eye, and various combinations of eye, head, hand, and shoulder movements and positions may be used for identifying whether the user is driving. Hill [0006] discloses that, in addition to the traveling speed, indications of whether a user is driving include the gestures of the user, the objects near the user, and the orientation of the mobile device being used; that user gesture identification consistent with driving may include identifying movements and positions of the user's eyes, head, shoulders, and hands, and any combinations thereof; that objects near the user may be used to identify whether the user is sitting in the driver seat; and that, as to the orientation of the mobile device, a driver is not likely to tilt the mobile device flat against her lap when using the device, so if a user uses the mobile device with the screen facing up (i.e., reads on "in relation to the device"), the user is less likely the driver (i.e., reads on computing a likelihood that the user of the device is at least one of a passenger). Hill, Fig. 1 shows that different criteria corresponding to driver gesture 120, objects near driver seat 130, and mobile device orientation 140 are utilized together to determine whether the user is a driver. This clearly indicates to one of ordinary skill in the art, based on the combination of the cited teachings as a whole, that the images taken by the camera are utilized to determine whether the head or face tilts in a nodding motion, and to determine the angle and orientation of the mobile device in relation to the user, in order to determine whether the user is using the mobile device with the screen facing up and hence whether the user is driving or is a passenger; a nodding motion would indicate to one of ordinary skill in the art that the user is consistently looking at or using the mobile device. As such, Hill clearly reads on the applicant's argued limitations of "based on the head angle of the user in relation to the device as determined based on the processing of the one or more visual captures in relation to the vehicle, computing a likelihood that a user of the device, as depicted within the one or more visual captures and as present within the vehicle is at least one of a driver or a passenger."

In addition, contrary to the applicant's arguments, a combination of the various embodiments of Hill would not render the prior art invention being modified unsatisfactory for its intended purpose, as such a modification does not change the intended purpose of the invention of detecting whether the user of the mobile device is driving or is a passenger. Furthermore, Hill, Fig. 1 shows that different criteria corresponding to driver gesture 120, objects near driver seat 130, and mobile device orientation 140 are utilized together to determine whether the user is a driver; Hill [0026] discloses that numerous modifications and variations could be made by those ordinarily skilled in the art without departing from the scope and spirit disclosed; Hill [0007] discloses that various combinations of eye, head, hand, and shoulder movements and positions may be used for identifying whether the user is driving; and Hill [0008] discloses that various embodiments may use various objects and combinations thereof to identify the user's seat. This clearly indicates to one of ordinary skill in the art that the different embodiments and criteria utilized in determining whether a user is the driver or the passenger can be combined together.

Therefore, the argued limitations read upon the cited references, or are written so broadly that they read upon the cited references, as follows:

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 114 and 117-121 are rejected under 35 U.S.C. 103 as being unpatentable over Hill et al. (US Patent Publication 2015/0031349, hereinafter referenced as Hill).

Regarding claim 114, Hill discloses: A non-transitory computer readable medium having instructions stored thereon that, when executed by a processing device, cause the processing device to perform operations comprising: [...] a head angle of a user; based on the head angle of the user in relation to the device [...] or a passenger; and modifying an implementation of a restriction at the device based on the computed likelihood (Hill, [0004]-[0005] discloses detecting whether a user of the mobile device is driving (i.e., reads on computing a likelihood that the user of the device, as present in the vehicle, is at least one of a driver of the vehicle), and that if the user is found driving, certain distracting features of the mobile device, including texting, YouTube, emails, movies, and so forth, may be temporarily disabled by a lock-out mechanism of the mobile device (i.e., reads on modifying an implementation of a restriction at the device based on the computed likelihood), and discloses that the vehicle may be able to communicate its traveling speed to the mobile device. Hill, [0006] discloses that, in addition to the traveling speed, indications of whether a user is driving include the gestures of the user (i.e., reads on head angle), the objects near the user, and the orientation of the mobile device (i.e., reads on in relation to the device) being used; that the mobile device may include a user interaction analyzer (i.e., reads on processing device) to detect these driver indications, for example user gesture identification consistent with driving may include identifying (i.e., reads on determine) movements and positions of the user's eyes, head (i.e., reads on head angle), shoulders, and hands, and any combinations thereof; and that, as to the orientation of the mobile device (i.e., reads on in relation to a device), a driver is not likely to tilt the mobile device flat against her lap when using the device, so if a user uses the mobile device with the screen facing up, the user is less likely the driver (i.e., reads on computing a likelihood that a user of the device is at least one of a driver); and discloses that the vehicle (i.e., reads on vehicle) may be able to communicate its traveling speed to the mobile device. One of ordinary skill in the art would recognize that it is inherent for complex machinery such as a mobile device to include a processor, memory, and software instructions in communication with the user interaction analyzer in order to be able to perform the various functionalities of the invention).

Hill discloses in one embodiment that a combination of traveling speed, user gestures such as those of the eyes, head, shoulders, and hands, and the orientation of the mobile device in relation to the user is utilized in determining whether the user is a passenger or a driver, but fails to disclose in the same embodiment the use of visually captured images in that determination, and therefore fails to disclose "receiving one or more visual captures in relation to a vehicle; processing the one or more visual captures to determine, in relation to a device, a head angle of a user;" and "the head angle of the user in relation to the device as determined based on the processing of the one or more visual captures received in relation to the vehicle" and "computing a likelihood that the user of the device, as depicted within the one or more visual captures and as present in the vehicle, is at least one of a driver of the vehicle or a passenger;"

In a different embodiment, Hill discloses: receiving one or more visual captures in relation to a vehicle; (Hill, [0007]-[0008] discloses that one embodiment may identify the user gestures based on images taken by its cameras (i.e., reads on receiving one or more visual captures), wherein a rearward facing camera and facial recognition software distinguish the user's face from all of the objects comprising the background scenery, such as the interior components of the vehicle and the objects outside the vehicle, and discloses that the relative positions of the objects may indicate where the user is sitting in a vehicle (i.e., reads on in relation to a vehicle) and that a forward facing camera may be used to capture images for identifying the objects. Hill, [0020] discloses that the preferred embodiment also adopts the orientation of the mobile device for the user interaction analyzer to detect whether the user is a driver: if the screen is facing away from the direction of travel (i.e., reads on in relation to a vehicle) and tilting slightly forward, such as being within 45 degrees of vertical angle or any predetermined range of orientation of a driver's mobile device, the user may be the driver; on the contrary, if the user is a passenger, the user may tilt her device flat against her lap with the screen facing up.
Therefore, one of ordinary skill in the art would recognize, based on the combination of the cited teachings as a whole, that the images taken by the camera include images of the user in relation to the vehicle, as well as of components of the vehicle, that are received in order to be able to perform the identification of the user gestures).

processing the one or more visual captures to determine, in relation to a device, a head angle of a user; the head angle of the user in relation to the device as determined based on the processing of the one or more visual captures received in relation to the vehicle; computing a likelihood that the user of the device, as depicted within the one or more visual captures and as present in the vehicle, is at least one of a driver of the vehicle or a passenger; (Hill, [0007] discloses that one embodiment may identify the user gestures based on images taken by its cameras (i.e., reads on the one or more visual captures and on "as depicted within the one or more visual captures"), wherein a rearward facing camera and facial recognition software distinguish the user's face from all of the objects comprising the background scenery, such as the interior components of the vehicle (i.e., reads on "received in relation to the vehicle" and on "as present in the vehicle") and the objects outside the vehicle, and identify (i.e., reads on processing to determine) when the user's face tilt (i.e., reads on head angle) changes in a nodding motion, upon which the user interaction analyzer may decide that the user is a driver (i.e., reads on computing a likelihood that the user of the device is at least one of a driver of the vehicle); another embodiment with an eye movement tracking sensor may detect that the user is a driver by identifying an up and down darting motion of the user's eye, and various combinations of eye, head, hand, and shoulder movements and positions may be used for identifying whether the user is driving. Hill, [0006] discloses that, in addition to the traveling speed, indications of whether a user is driving include the gestures of the user, the objects near the user, and the orientation of the mobile device being used; that user gesture identification consistent with driving may include identifying movements and positions of the user's eyes, head, shoulders, and hands, and any combinations thereof; that objects near the user may be used to identify whether the user is sitting in the driver seat; and that, as to the orientation of the mobile device, a driver is not likely to tilt the mobile device flat against her lap when using the device, so if a user uses the mobile device with the screen facing up (i.e., reads on in relation to the device), the user is less likely the driver (i.e., reads on computing a likelihood that the user of the device is at least one of a passenger). Hill, [0020] discloses that the preferred embodiment also adopts the orientation of the mobile device for the user interaction analyzer to detect whether the user is a driver: if the screen is facing away from the direction of travel and tilting slightly forward, such as being within 45 degrees of vertical angle or any predetermined range of orientation of a driver's mobile device, the user may be the driver; on the contrary, if the user is a passenger, the user may tilt her device flat against her lap with the screen facing up. Hill, Fig. 1 shows that different criteria corresponding to driver gesture 120, objects near driver seat 130, and mobile device orientation 140 are utilized together to determine whether the user is a driver. Therefore, one of ordinary skill in the art would recognize, based on the combination of the cited teachings as a whole, that the images taken by the camera are utilized to determine whether the head or face tilts in a nodding motion, and to determine the angle and orientation of the mobile device in relation to the user, in order to determine that the user is using the mobile device with the screen facing up and hence whether the user is driving or is a passenger, as a nodding motion would indicate to one of ordinary skill in the art that the user is consistently looking at or using the mobile device).

Therefore, at the time before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the invention of Hill to incorporate the teachings of the different embodiments, for the purpose of providing the system with a means to identify the user gestures (Hill, [0007]), for the purpose of conforming to the intent of the invention to modify and combine the various different embodiments (Hill, Fig. 1, [0006], [0008], [0017], and [0026]), and for the purpose of making the system more dynamic and adaptable by providing it with various alternatives in design and functionality, thereby allowing the system to handle a number of different combinations of specific design structures and scenarios and preventing it from being limited to a single specific design structure and scenario. Furthermore, one of ordinary skill in the art would recognize, based on the rationales supporting a conclusion of obviousness set forth in MPEP 2143, that the modification would involve a simple substitution of one known element and base device (i.e., performing a process of an embodiment of determining whether a user is a driver or a passenger, as taught by Hill) with another known element and comparable device utilizing a known technique (i.e., performing a process of a similar embodiment of determining whether a user is a driver or a passenger with additional and/or alternative features and functionalities, as taught by Hill) to improve similar devices in the same way and to obtain the predictable result of the system performing a process of determining whether a user is a driver or a passenger (i.e., as taught by Hill), and that the choice is dependent upon the specific intended use, design incentives, needs, and requirements of the user and the system (e.g., due to teachings of a known standard, current technology, conservation of resources, personal preferences, economic considerations, etc.), as established in MPEP 2144.04.

Regarding claim 117, Hill discloses: The non-transitory computer readable medium of claim 114, (see claim 114).
wherein the memory further stores instructions to cause the system to perform operations comprising: determining an orientation angle of the device relative to the ground; and comparing the head angle to determine a relationship between the head angle and the orientation angle of the device; wherein computing a likelihood that the user of the device is at least one of a driver or a passenger comprises computing the likelihood based on the relationship (Hill, [0006] discloses that, in addition to the traveling speed, indications of whether a user is driving include the gestures of the user, the objects near the user, and the orientation of the mobile device being used; that the mobile device may include a user interaction analyzer to detect these driver indications, for example user gesture identification consistent with driving may include identifying movements and positions of the user's eyes, head, shoulders, and hands, and any combinations thereof; and that, as to the orientation of the mobile device, a driver is not likely to tilt the mobile device flat against her lap when using the device, so if a user uses the mobile device with the screen facing up, the user is less likely the driver. Hill, [0020] discloses that the preferred embodiment also adopts the orientation of the mobile device for the user interaction analyzer to detect whether the user is a driver: if the screen is facing away from the direction of travel and tilting slightly forward, such as being within 45 degrees of vertical angle or any predetermined range of orientation of a driver's mobile device, the user may be the driver; on the contrary, if the user is a passenger, the user may tilt her device flat against her lap with the screen facing up).

Regarding claim 118, Hill discloses: The non-transitory computer readable medium of claim 117, (see claim 117). wherein the orientation angle of the device comprises a substantially flat orientation (Hill, [0006] and [0020], as discussed for claim 117: a driver is not likely to tilt the mobile device flat against her lap when using the device, so if a user uses the mobile device with the screen facing up, the user is less likely the driver; if the user is a passenger, the user may tilt her device flat against her lap with the screen facing up).

Regarding claim 119, Hill discloses: The non-transitory computer readable medium of claim 114, (see claim 114).
wherein processing the one or more visual captures comprises processing the one or more visual captures to determine (a) a head orientation angle in relation to the device and (b) the eye gaze angle in relation to the head orientation (Hill, [0007] discloses that one embodiment may identify the user gestures based on images taken by its cameras, wherein a rearward facing camera and facial recognition software identify when the user's face tilt changes in a nodding motion, upon which the user interaction analyzer may decide that the user is a driver; another embodiment with an eye movement tracking sensor may detect that the user is a driver by identifying an up and down darting motion of the user's eye; and various combinations of eye, head, hand, and shoulder movements and positions may be used for identifying whether the user is driving. Hill, [0006], as discussed for claim 117: indications of whether a user is driving include the gestures of the user, the objects near the user, and the orientation of the mobile device being used, including movements and positions of the user's eyes, head, shoulders, and hands, and any combinations thereof; if a user uses the mobile device with the screen facing up, the user is less likely the driver).

Regarding claim 120, Hill discloses: The non-transitory computer readable medium of claim 114, (see claim 114). wherein computing a likelihood that a user of the device is at least one of a driver or a passenger comprises: computing the likelihood that the user of the device is at least one of a driver or a passenger based on (a) the determined head orientation angle in relation to the device, and (b) an eye gaze angle in relation to the head orientation (Hill, [0007] and [0006], as discussed for claim 119: the user's face tilt changing in a nodding motion and an up and down darting motion of the user's eye, together with the device orientation, are used to identify whether the user is driving).

Regarding claim 121, Hill discloses: The non-transitory computer readable medium of claim 120, (see claim 120).
wherein computing a likelihood that a user of the device is at least one of a driver or a passenger comprises: based on (a) the head orientation angle in relation to the device, (b) an eye gaze angle in relation to the head orientation, and (c) an orientation angle of the device in relation to the ground, computing the likelihood that the user of the device is at least one of a driver or a passenger (Hill, [0007] and [0006], as discussed for claims 119 and 120, with the further note that a user using the mobile device with the screen facing up (i.e., reads on an orientation angle of the device in relation to the ground) is less likely the driver).

Claim 115 is rejected under 35 U.S.C. 103 as being unpatentable over Hill et al. (US Patent Publication 2015/0031349, hereinafter referenced as Hill) in view of Miller et al. (US Patent Publication 2014/0203926, hereinafter referenced as Miller).

Regarding claim 115, Hill discloses: The non-transitory computer readable medium of claim 114, (see claim 114). wherein processing the one or more visual captures comprises processing the one or more visual captures to determine whether the head angle is maintained with respect to the device (Hill, [0007] discloses that one embodiment may identify the user gestures based on images taken by its cameras, wherein a rearward facing camera and facial recognition software identify when the user's face tilt changes in a nodding motion, upon which the user interaction analyzer may decide that the user is a driver; another embodiment with an eye movement tracking sensor may detect that the user is a driver by identifying an up and down darting motion of the user's eye; and various combinations of eye, head, hand, and shoulder movements and positions may be used for identifying whether the user is driving).
Hill discloses utilizing a camera to determine a head angle and an eye gaze angle, but fails to disclose that said determination is performed for a predefined time interval, and therefore fails to disclose "processing the one or more visual captures to determine whether the head angle is maintained with respect to the device for at least a defined chronological interval."

In a related field of endeavor, Miller discloses: processing the one or more visual captures to determine whether the head angle is maintained with respect to the device for at least a defined chronological interval (Miller, [0036] discloses that the interior camera may capture an image of the driver while in the vehicle and process such an image to determine whether the driver's eyes are closed for a period that exceeds a predetermined amount of time, or whether the head of the driver leans to one side or the other, to determine an alertness state of the driver. Therefore, one of ordinary skill in the art would recognize and find obvious, based on the combined teachings of the cited portions, that a determination is made whether a motion, such as a head angle, is maintained for a predetermined amount of time in order to determine the state of the driver).

Therefore, at the time before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the invention of Hill to incorporate the teachings of Miller for the purpose of providing the system with a means to determine the state of the user (Miller, [0036]). One of ordinary skill in the art would recognize that the modification would make the system more dynamic and adaptable by providing it with various alternatives in design and functionality, thereby allowing the system to handle a number of different combinations of specific design structures and scenarios and preventing it from being limited to a single specific design structure and scenario. Furthermore, one of ordinary skill in the art would recognize, based on the rationales supporting a conclusion of obviousness set forth in MPEP 2143, that the modification would involve a simple substitution of one known element and base device (i.e., utilizing an image of a user to determine a head and eye state of the user, as taught by Hill) with another known element (i.e., utilizing an image of a user to determine a head and eye state of the user, wherein the determination establishes whether the head and eye state are maintained for a predetermined amount of time, as taught by Miller) to obtain the predictable result of the system utilizing an image of a user to determine a head and eye state of the user (i.e., as taught by Hill and Miller), and that the choice is dependent upon the specific intended use, design incentives, needs, and requirements of the user and the system (e.g., due to teachings of a known standard, current technology, conservation of resources, personal preferences, economic considerations, etc.), as established in MPEP 2144.04.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL Y MAPA, whose telephone number is (571) 270-5540. The examiner can normally be reached Monday through Thursday, 10 AM - 8 PM EST. Examiner interviews are available via telephone, in person, and by video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anthony Addy, can be reached at (571) 272-7795. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL Y MAPA/
Primary Examiner, Art Unit 2645
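To make the technique at the center of the §103 dispute concrete: claim 114 combines a head angle derived from visual captures with device orientation to score driver vs. passenger, and Hill [0006]-[0008] and [0020] describe the same cues only qualitatively. Below is a minimal sketch of that style of decision logic; every weight, threshold, and name here is a hypothetical illustration (Hill specifies only the 45-degrees-of-vertical example, not a scoring scheme):

```python
from dataclasses import dataclass

@dataclass
class Cues:
    nodding: bool             # face tilt changing in a nodding motion (Hill [0007])
    screen_tilt_deg: float    # device angle from vertical; ~90 = flat on lap, screen up
    head_angle_held_s: float  # seconds the head angle is held toward the device (cf. claim 115 / Miller [0036])

def driver_likelihood(c: Cues) -> float:
    """Fold the cues into a driver likelihood in [0, 1]; weights are illustrative."""
    score = 0.5
    if c.nodding:
        score += 0.3              # darting between road and device suggests a driver
    if c.screen_tilt_deg <= 45.0:
        score += 0.2              # near-vertical screen is driver-like (Hill [0020])
    elif c.screen_tilt_deg >= 80.0:
        score -= 0.3              # flat, screen-up orientation is passenger-like (Hill [0006])
    if c.head_angle_held_s >= 5.0:
        score -= 0.2              # a steadily held head angle suggests a passenger
    return min(max(score, 0.0), 1.0)

# Claim 114's final step: modify a restriction based on the computed likelihood.
if driver_likelihood(Cues(nodding=True, screen_tilt_deg=30.0, head_angle_held_s=1.0)) > 0.7:
    print("Driver likely: temporarily disable distracting features (cf. Hill [0005]).")
```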

Prosecution Timeline

Dec 10, 2018: Application Filed
Jul 05, 2019: Response after Non-Final Action
Feb 29, 2020: Non-Final Rejection — §103
Sep 04, 2020: Response Filed
Oct 30, 2020: Final Rejection — §103
May 04, 2021: Request for Continued Examination
May 05, 2021: Response after Non-Final Action
May 22, 2021: Non-Final Rejection — §103
Nov 26, 2021: Response Filed
Dec 15, 2021: Final Rejection — §103
Jun 21, 2022: Request for Continued Examination
Jun 24, 2022: Response after Non-Final Action
Jul 02, 2022: Non-Final Rejection — §103
Jan 11, 2023: Response Filed
Jan 28, 2023: Final Rejection — §103
Aug 02, 2023: Request for Continued Examination
Aug 09, 2023: Response after Non-Final Action
Aug 11, 2023: Non-Final Rejection — §103
Feb 20, 2024: Response Filed
Apr 22, 2024: Final Rejection — §103
Oct 28, 2024: Request for Continued Examination
Oct 30, 2024: Response after Non-Final Action
Oct 30, 2024: Non-Final Rejection — §103
May 02, 2025: Response Filed
May 18, 2025: Final Rejection — §103
Nov 21, 2025: Request for Continued Examination
Dec 01, 2025: Response after Non-Final Action
Dec 04, 2025: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593282
DATA SENDING METHOD AND FOR SAVING POWER CONSUMPTION AND DATA RECEIVING METHOD FOR SAVING POWER CONSUMPTION
2y 5m to grant • Granted Mar 31, 2026

Patent 12587824
5G Stand Alone (SA) Radio Access Network (RAN) with Evolved Packet Core (EPC)
2y 5m to grant • Granted Mar 24, 2026

Patent 12587944
MANAGEMENT OF ROUTING
2y 5m to grant • Granted Mar 24, 2026

Patent 12574277
DYNAMIC CONTROL OF POWER AMPLIFIER BACK OFF FOR AMPLIFY AND FORWARD REPEATERS
2y 5m to grant • Granted Mar 10, 2026

Patent 12574789
WIRELESS COMMUNICATION CONTROL DEVICE, WIRELESS COMMUNICATION DEVICE, AND WIRELESS COMMUNICATION CONTROL METHOD CAPABLE OF REDUCING WASTEFUL COMMUNICATION WHILE PREVENTING CONCENTRATION OF CONNECTIONS ON SAME ACCESS POINT OF WIRELESS NETWORK
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 11-12
Grant Probability: 71%
With Interview: 99% (+27.4%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 728 resolved cases by this examiner. Grant probability derived from career allow rate.
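The with-interview figure appears to follow from adding the interview lift to the career allow rate; a one-liner under that additive assumption shows how 99% falls out of the unrounded inputs:

```python
granted, resolved = 518, 728
base = granted / resolved      # 0.7115..., displayed as 71%
lift = 0.274                   # interview lift, in percentage points
print(f"{base:.0%} base, {base + lift:.0%} with interview")  # -> "71% base, 99% with interview"
```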
