Prosecution Insights
Last updated: April 19, 2026
Application No. 18/608,234

SYSTEM FOR RECOGNIZING GESTURES OF VEHICLE PASSENGER

Non-Final OA — §102, §103
Filed: Mar 18, 2024
Examiner: LEE, JONATHAN S
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Kia Corporation
OA Round: 1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 84% (493 granted / 585 resolved; +22.3% vs TC avg; above average)
Interview Lift: +9.5% (a moderate, roughly +10% lift in allow rate across resolved cases with an interview)
Typical Timeline: 2y 4m average prosecution; 19 applications currently pending
Career History: 604 total applications across all art units
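
The arithmetic behind these cards is simple to reproduce. Below is a minimal sketch, assuming the allow rate is granted over resolved and the interview lift is the allow-rate gap between cases with and without an interview. The TC-average rate and the interview/no-interview split are not shown on this page, so those values are back-solved assumptions chosen only to reproduce the displayed figures.

    # Sketch of the Examiner Intelligence arithmetic (assumptions noted inline).
    granted, resolved = 493, 585            # from "493 granted / 585 resolved"
    allow_rate = granted / resolved         # -> 0.843, shown as 84%

    tc_avg = 0.62                           # assumed; implied by the "+22.3% vs TC avg" delta
    print(f"Career allow rate: {allow_rate:.1%} ({allow_rate - tc_avg:+.1%} vs TC avg)")

    # Interview lift = allow rate with an interview minus allow rate without one.
    # These subgroup counts are hypothetical, chosen only to reproduce the +9.5% figure
    # while summing to the real totals (181 + 312 = 493 granted; 200 + 385 = 585 resolved).
    with_interview = 181 / 200              # hypothetical interviewed subgroup
    without_interview = 312 / 385           # hypothetical non-interviewed subgroup
    print(f"Interview lift: {with_interview - without_interview:+.1%}")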

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 41.9% (+1.9% vs TC avg)
§102: 28.1% (-11.9% vs TC avg)
§112: 10.3% (-29.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 585 resolved cases.
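
The page does not define what the per-statute percentages measure; a plausible reading is the share of this examiner's resolved cases that drew at least one rejection under each statute. Under that assumption, the quoted deltas let you back out the Tech Center average estimate for each statute, and every one recovers to the same 40% baseline, suggesting a single TC-wide reference value:

    # Sketch: recover the TC-average estimate behind each "vs TC avg" delta.
    # Assumes each figure is a per-statute rejection frequency (not defined on the page).
    examiner_rate = {"§101": 0.078, "§103": 0.419, "§102": 0.281, "§112": 0.103}
    delta_vs_tc = {"§101": -0.322, "§103": 0.019, "§102": -0.119, "§112": -0.297}

    for statute, rate in examiner_rate.items():
        tc_avg = rate - delta_vs_tc[statute]   # every statute recovers ~40.0%
        print(f"{statute}: examiner {rate:.1%}, TC avg {tc_avg:.1%}, delta {delta_vs_tc[statute]:+.1%}")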

Office Action

Rejections under §102 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 14-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Khorsandi et al. (FabriCar: Enriching the User Experience of In-Car Media Interactions with Ubiquitous Vehicle Interiors using E-textile Sensors, July 2023, DIS '23: Proceedings of the 2023 ACM Designing Interactive Systems Conference, Pages 1438-1456), hereinafter “Khorsandi”.

Regarding claim 14, Khorsandi teaches: A system for recognizing gestures of a passenger in a vehicle (See the Abstract.), the system comprising: a controller comprising a gesture tracking unit configured to track a gesture movement within a detection target area based on an image captured by an image sensor (See page 1443, right column, section 4.1: “Three cameras were placed to video-record the interactions from the front (for the seat-belt and eye on the road), side view (for the headrest and steering wheel), and back (for the screen interaction) to ensure all interactions were perceived by the interfaces…However, the seat-belt sensor often glitched during the pilot evaluation (mainly due to the limitations of resistive sensing on non-flat surfaces [76]), causing signal disruption when sensors are bent), in which case we employed the Wizard-of-Oz (WoZ) method [25] by remotely controlling the volume timely according to user input on the seat-belt pad (observed by the wizard constantly during experiments on the monitor behind the participant) to maintain a realistic experience.” The system, with the aid of the wizard, tracks positions of a hand on a path of the seat belt webbing.) when the passenger performs a gesture moving along webbing of a seat belt fastened to the passenger (See Fig. 1, hand swipe up/down across seat belt webbing.); and a signal output unit of the controller configured to output an output signal designated for the gesture movement and control a function with respect to the output signal to be performed (See page 1443, right column, section 4.1, quoted above.).

Regarding claim 15, Khorsandi teaches: The system of claim 14, further comprising: a target area detection unit configured to detect a detection target area around the webbing; and an object detection unit configured to detect a gesture object located within the detection target area by a gesture, wherein the gesture tracking unit is further configured to track a gesture object moving within the detection target area by a gesture of the passenger (See the tracking by e-textile sensors of a hand swiping up/down the webbing of the seat belt in Fig. 1 and, when that fails, tracking by a camera/wizard on page 1443, section 4.1.).

Regarding claim 16, Khorsandi teaches: The system of claim 15, wherein: the target area detection unit is further configured to detect the webbing of the seat belt based on a captured image of the passenger and sets the webbing as a detection target area (See page 1443, right column, section 4.1: “Three cameras were placed to video-record the interactions from the front (for the seat-belt”.), and the object detection unit is further configured to detect one of a passenger’s hand moving along the webbing, a belt cover, and a belt clip as a gesture object (See Fig. 1.).

Regarding claim 17, Khorsandi teaches: The system of claim 16, wherein the surface of the gesture object is coated with reflective paint (In this case, the “gesture object” is a “belt cover”; see Fig. 2e and page 1441, section 2.2.2: “The first method is the coating method, in which non-conductive thread is coated with metals, galvanic substances or metallic salts to be conductive. Electroless plating [63] and a conductive polymer coating [45, 105] are the common processes of coating.”).
Regarding claim 18, Khorsandi teaches: The system of claim 16, wherein the signal output unit is further configured to determine whether there is a designated output signal bundle for a detected hand shape (See page 1443, right column, section 4.1, quoted in the treatment of claim 14 above. Receipt of the implied input by the wizard, who visually detects the hand, meets the claimed “designated output signal bundle for a detected hand shape”.).

Regarding claim 19, Khorsandi teaches: The system of claim 14, further comprising a gesture sensor provided to be movable along the webbing by a gesture of the passenger, wherein the gesture tracking unit is further configured to track the gesture sensor moving along the webbing when the passenger performs a gesture (See page 1443, section 4.1: “However, the seat-belt sensor often glitched during the pilot evaluation (mainly due to the limitations of resistive sensing on non-flat surfaces [76]), causing signal disruption when sensors are bent)”. This passage implies that there are cases in which the seat-belt sensor does not glitch when the sensor is bent, which meets the claimed “track the gesture sensor moving along the webbing when the passenger performs a gesture”.).

Regarding claim 20, Khorsandi teaches: The system of claim 19, wherein the signal output unit is further configured to determine whether there is a designated output signal bundle for a movement pattern of the gesture sensor (See page 1443, section 4.1, quoted in the treatment of claim 19 above. If the seat-belt sensor glitches, the system determines there is no “designated output signal bundle for a movement pattern of the gesture sensor” and relies on the wizard; but if the sensor does not glitch, the system determines there is a result in the form of a “designated output signal bundle for a movement pattern of the gesture sensor”.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 2, 6, 7, 10, and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Aftab et al. (You Have a Point There: Object Selection Inside an Automobile Using Gaze, Head Pose and Finger Pointing, 2020, ICMI ’20, Pages 595-603), hereinafter “Aftab”, in view of Kim et al. (Passengers’ Gesture Recognition Model in Self-driving Vehicles, 2019, IEEE 4th International Conference on Computer and Communication Systems, Pages 239-242), hereinafter “Kim”.

Claim 1 is met by the combination of Aftab and Kim, wherein Aftab discloses: A system for recognizing gestures of a [driver] in a vehicle (See the Abstract.), the system comprising: a controller comprising a target area detection unit configured to detect a detection target area based on an image captured by an image sensor (See page 597, left column, section 3.1.1: “The gesture camera, mounted next to the Roof Function Centre of the car, captures hand and finger movements in the 3D space using a Time-of-Flight (ToF) camera. It has a wide Field-of-View so that it covers almost the entire operating zone of the driver. The gesture camera system detects a finger pointing gesture…”. The examiner asserts that the entire “wide field-of-view” meets both the claimed “image” and “detection target area”, since Aftab searches for a gesture within that field-of-view.), the detection target area being located around webbing of a seat belt configured to be fastened to the [driver] (See page 600, Fig. 7, where the field-of-view is set “so that it covers almost the entire operating zone of the driver” (as stated on page 597, left column, section 3.1.1) and consequently is located “around” the depicted seat belt webbing.); a hidden area detection unit of the controller configured to detect an area hidden within the detection target area by a gesture of the [driver] (See page 598, paragraph bridging the left and right columns: “Due to occlusion of the eyes or the finger, there are some frames with missing data. Occlusion of the eyes mainly occurs when the driver looks downward, and therefore, the eyelids occlude the pupils, or when the pointing arm comes in front of the face.”); and a signal output unit of the controller configured to output an output signal designated for the hidden area (See page 598, right column: “To fill the missing data, we use linear interpolation from the two nearest neighbouring frames.”) and control a function with respect to the output signal to be performed (See page 596, left column, first full paragraph: “In order to identify the desired object or Area-of-Interest (AOI), the user may use a finger pointing gesture, as this type of gesture provides a deictic reference to the various real-world objects, as shown in Figure 1. The action to be performed on the selected object may be provided by speech commands, such as, "what is that?" or "close that window".” Based on this passage and Fig. 6 on page 599, Aftab discloses controlling a function to be performed with respect to a combination of interpolated eye pose data (meeting the claimed “output signal”), head pose, finger pose, and speech commands.).
Aftab does not explicitly disclose recognizing gestures of a passenger and a detection target area being located around webbing of a seat belt configured to be fastened to the passenger; however, Kim discloses these limitations in Fig. 2 and page 239, right column: “Of the user gestures in vehicles studied previously, physical gestures include gestures to operate vehicle control devices by moving the hands such as turning the wheel steering [7], turning on signal lights [8], honking the car horn [9], taking hands off the wheel [10], and adjusting seat belts [7].”

Aftab and Kim together disclose the limitations of claim 1. Kim is directed to a similar field of art (in-vehicle gesture recognition). Therefore, Aftab and Kim are combinable. Modifying the system and method of Aftab by adding the capability of detecting gestures of a passenger (along with the driver) as well as locating a detection target area around webbing of a seat belt configured to be fastened to the passenger, as disclosed by Kim, would yield the expected and predictable result of gesture analysis of all occupants of a vehicle. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Aftab and Kim in this way.

Claim 2 is met by the combination of Aftab and Kim, wherein:
The combination of Aftab and Kim discloses: The system of claim 1, wherein
And Aftab further discloses: the target area detection unit is further configured to detect patterns within the detection target area (The examiner asserts that the eye pose and finger pose in Fig. 6 on page 599 meet the claimed “patterns within the detection target area”.).

Claim 6 is met by the combination of Aftab and Kim, wherein:
The combination of Aftab and Kim discloses: The system of claim 2, wherein
And Aftab further discloses: the hidden area is an area where a pattern hidden by a gesture is located among the patterns (See page 598, paragraph bridging the left and right columns: “Due to occlusion of the eyes or the finger, there are some frames with missing data.”).

Claim 7 is met by the combination of Aftab and Kim, wherein:
The combination of Aftab and Kim discloses: The system of claim 6, wherein
And Aftab further discloses: the signal output unit is further configured to output an output signal designated for the hidden pattern (See page 598, right column: “To fill the missing data, we use linear interpolation from the two nearest neighbouring frames.”).

Claim 10 is met by the combination of Aftab and Kim, wherein Aftab further discloses: A vehicle comprising the system of claim 1 (See the system of Fig. 6 as part of the car in Fig. 7.).

Claim 11 is met by the combination of Aftab and Kim, wherein Aftab discloses: A system for recognizing gestures of a [driver] in a vehicle (See the Abstract.), the system comprising: a controller comprising a target area detection unit configured to detect a detection target area based on an image captured by an image sensor (See page 597, left column, section 3.1.1, quoted in the treatment of claim 1 above. The examiner asserts that the entire “wide field-of-view” meets both the claimed “image” and “detection target area”, since Aftab searches for a gesture within that field-of-view.), the detection target area being located around webbing of a seat belt fastened to the [driver] (See page 600, Fig. 7, where the field-of-view is set “so that it covers almost the entire operating zone of the driver” (as stated on page 597, left column, section 3.1.1) and consequently is located “around” the depicted seat belt webbing.); a hand detection unit of the controller configured to detect a [driver’s] hand positioned within the detection target area by a gesture when the [driver] performs the gesture (See page 597, left column, section 3.1.1: “The gesture camera system detects a finger pointing gesture…”.); and a signal output unit of the controller configured to output an output signal designated for the position of the hand (See page 597, left column, section 3.1.1: “…and calculates the vector from the tip of the finger to the base of the finger. The 3D coordinates of the fingertip are used as the finger position.”) and control a function with respect to the output signal to be performed (See page 596, left column, first full paragraph, quoted in the treatment of claim 1 above. Based on that passage and Fig. 6 on page 599, Aftab discloses controlling a function to be performed with respect to a combination of interpolated eye pose data, head pose, finger pose (meeting the claimed “output signal”), and speech commands.).

Aftab does not explicitly disclose recognizing gestures of a passenger, a detection target area being located around webbing of a seat belt fastened to the passenger, and detecting a passenger’s hand positioned in the detection target area by a gesture when the passenger performs the gesture; however, Kim discloses these limitations in Fig. 2 and page 239, right column: “Of the user gestures in vehicles studied previously, physical gestures include gestures to operate vehicle control devices by moving the hands such as turning the wheel steering [7], turning on signal lights [8], honking the car horn [9], taking hands off the wheel [10], and adjusting seat belts [7].”

Aftab and Kim together disclose the limitations of claim 11. Kim is directed to a similar field of art (in-vehicle gesture recognition). Therefore, Aftab and Kim are combinable. Modifying the system and method of Aftab by adding the capability of detecting gestures of a passenger (along with the driver) as well as detecting a detection target area located around webbing of a seat belt fastened to the passenger and detecting a passenger’s hand positioned within the detection target area by a gesture when the passenger performs the gesture, as disclosed by Kim, would yield the expected and predictable result of gesture analysis of all occupants of a vehicle. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Aftab and Kim in this way.

Claim(s) 3, 4, 8, and 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Aftab (You Have a Point There: Object Selection Inside an Automobile Using Gaze, Head Pose and Finger Pointing, 2020, ICMI ’20, Pages 595-603) in view of Kim (Passengers’ Gesture Recognition Model in Self-driving Vehicles, 2019, IEEE 4th International Conference on Computer and Communication Systems, Pages 239-242) in view of Mateu-Mateus et al. (A non-contact camera-based method for respiratory rhythm extraction, 2021, Biomedical Signal Processing and Control, Vol. 66, Pages 1-10), hereinafter “Mateu-Mateus”.

Claim 3 is met by the combination of Aftab, Kim, and Mateu-Mateus, wherein:
The combination of Aftab and Kim discloses: The system of claim 2, wherein
The combination of Aftab and Kim does not disclose the following; however, Mateu-Mateus discloses: the patterns are printed on a belt cover provided on the webbing or the webbing (The examiner notes that this limitation (in combination with claim 2) is an additional operation performed by the target area detection unit that does not affect any of the operations recited in claim 1. The examiner treats this operation as a branching/independent step. Turning to Mateu-Mateus, see the detection of a printed pattern on a material (serving as the claimed “belt cover”) placed on the webbing of a seat belt in a driver-assistance system in Fig. 2c and page 3, left column, section 2.1.1: “Once the patterns are placed on the subject, as depicted in Fig. 2c, the algorithm can start the detection of the pattern inside the frame, tracking of the obtained features, and the posterior respiratory signal extraction.”).

Aftab, Kim, and Mateu-Mateus together disclose the limitations of claim 3. Mateu-Mateus is directed to a related field of art (monitoring of a driver to improve safety while driving). Therefore, Aftab, Kim, and Mateu-Mateus are combinable. Modifying the system and method of Aftab and Kim by adding the capability of detecting patterns printed on a belt cover on the webbing, as disclosed by Mateu-Mateus, would yield the expected and predictable result of proof-of-concept of a comprehensive driver interaction and safety system in a vehicle. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Aftab, Kim, and Mateu-Mateus in this way.

Claim 4 is met by the combination of Aftab, Kim, and Mateu-Mateus, wherein:
The combination of Aftab and Kim discloses: The system of claim 2, wherein
The combination of Aftab and Kim does not disclose the following; however, Mateu-Mateus discloses: the target area detection unit is further configured to: detect the belt cover provided on the webbing based on a captured image of the passenger (The examiner notes that this limitation (in combination with claim 2) is an additional operation performed by the target area detection unit that does not affect any of the operations recited in claim 1. The examiner treats this operation as a branching/independent step. Turning to Mateu-Mateus, see Fig. 2c and page 3, left column, section 2.1.1, quoted in the treatment of claim 3 above.), set a cover area including the belt cover as a detection target area, and detect patterns provided on the belt cover within the set cover area (See the same passage of Fig. 2c and section 2.1.1.). See the motivation to combine in the treatment of claim 3.

Claim 8 is met by the combination of Aftab, Kim, and Mateu-Mateus, wherein:
The combination of Aftab and Kim discloses: The system of claim 1, wherein
The combination of Aftab and Kim does not disclose the following; however, Mateu-Mateus discloses: the target area detection unit is further configured to: detect the belt cover provided on the webbing based on a captured image of the passenger (The examiner notes that this limitation is an additional operation performed by the target area detection unit that does not affect any of the operations recited in claim 1. The examiner treats this operation as a branching/independent step. Turning to Mateu-Mateus, see Fig. 2c and page 3, left column, section 2.1.1, quoted in the treatment of claim 3 above.), set a cover area including the belt cover as a detection target area, segment the set cover area, and detect the segmented areas (See the same passage of Fig. 2c and section 2.1.1.). See the motivation to combine in the treatment of claim 3.

Claim 9 is met by the combination of Aftab, Kim, and Mateu-Mateus, wherein:
The combination of Aftab, Kim, and Mateu-Mateus discloses: The system of claim 8, further comprising
And Aftab further discloses: a hand detection unit configured to detect a [driver’s] hand positioned within the detection target area by a gesture, wherein the hidden area is a segmented area hidden by the passenger’s hand (See page 598, paragraph bridging the left and right columns: “Due to occlusion of the eyes or the finger, there are some frames with missing data. Occlusion of the eyes mainly occurs when the driver looks downward, and therefore, the eyelids occlude the pupils, or when the pointing arm comes in front of the face.” A driver’s hand is understood to be found when detecting the gesture which occludes a hidden area of eyes.).

Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Aftab (You Have a Point There: Object Selection Inside an Automobile Using Gaze, Head Pose and Finger Pointing, 2020, ICMI ’20, Pages 595-603) in view of Kim (Passengers’ Gesture Recognition Model in Self-driving Vehicles, 2019, IEEE 4th International Conference on Computer and Communication Systems, Pages 239-242) in view of Yi et al. (Safety Belt Wearing Detection Algorithm Based on Human Joint Points, 2021, IEEE International Conference on Consumer Electronics and Computer Engineering, Pages 538-541), hereinafter “Yi”.
Claim 5 is met by the combination of Aftab, Kim, and Yi, wherein:
The combination of Aftab and Kim discloses: The system of claim 2, wherein
The combination of Aftab and Kim does not disclose the following; however, Yi discloses: the target area detection unit is further configured to: detect an upper body of the passenger based on a captured image of the passenger, set an upper body area as a detection target area (The examiner notes that this limitation is an additional operation performed by the target area detection unit that does not affect any of the operations recited in claim 1. The examiner treats this operation as a branching/independent step. Turning to Yi, see detection of relevant joint points of the upper body in Fig. 1 and page 539, left column: “Due to the constraints of the driver's driving position, the camera placed on the upper side of the windshield can only capture the driver's upper body, and the key parts of the seat belt wearing condition detection are also concentrated on the human upper body. In order to reduce the amount of calculation and reduce the time cost, irrelevant joint points are deleted, and only the left and right shoulders and left and right hip joints of the more representative driver can be tested.”), and detect patterns provided on the webbing within the set upper body area (See Figs. 3-4 and page 540, section III.B, detection of the seat belt feature vector (serving as the claimed “patterns provided on the webbing”) within the upper body area bounded in red.).

Aftab, Kim, and Yi together disclose the limitations of claim 5. Yi is directed to a related field of art (monitoring of a driver to improve safety while driving). Therefore, Aftab, Kim, and Yi are combinable. Modifying the system and method of Aftab and Kim by adding the capability to “detect an upper body of the passenger based on a captured image of the passenger, set an upper body area as a detection target area and detect patterns provided on the webbing within the set upper body area”, as disclosed by Yi, would yield the expected and predictable result of proof-of-concept of a comprehensive driver interaction and safety system in a vehicle. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Aftab, Kim, and Yi in this way.

Claim(s) 12 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Aftab (You Have a Point There: Object Selection Inside an Automobile Using Gaze, Head Pose and Finger Pointing, 2020, ICMI ’20, Pages 595-603) in view of Kim (Passengers’ Gesture Recognition Model in Self-driving Vehicles, 2019, IEEE 4th International Conference on Computer and Communication Systems, Pages 239-242) in view of Yi (Safety Belt Wearing Detection Algorithm Based on Human Joint Points, 2021, IEEE International Conference on Consumer Electronics and Computer Engineering, Pages 538-541) in view of Khorsandi (FabriCar: Enriching the User Experience of In-Car Media Interactions with Ubiquitous Vehicle Interiors using E-textile Sensors, July 2023, DIS '23: Proceedings of the 2023 ACM Designing Interactive Systems Conference, Pages 1438-1456).
Claim 12 is met by the combination of Aftab, Kim, Yi, and Khorsandi, wherein:
The combination of Aftab and Kim discloses: The system of claim 11, wherein:
The combination of Aftab and Kim does not disclose the following; however, Yi teaches: the target area detection unit is further configured to detect an upper body of the passenger and sets an upper body area as a detection target area (The examiner notes that this limitation is an additional operation performed by the target area detection unit that does not affect any of the operations recited in claim 1. The examiner treats this operation as a branching/independent step. Turning to Yi, see detection of relevant joint points of the upper body in Fig. 1 and page 539, left column, quoted in the treatment of claim 5 above.).

Aftab, Kim, and Yi together partly disclose the limitations of claim 12. Yi is directed to a related field of art (monitoring of a driver to improve safety while driving). Therefore, Aftab, Kim, and Yi are combinable. Modifying the system and method of Aftab and Kim by adding the capability to “detect an upper body of the passenger based on a captured image of the passenger and sets an upper body area as a detection target area”, as disclosed by Yi, would yield the expected and predictable result of proof-of-concept of a comprehensive driver interaction and safety system in a vehicle. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Aftab, Kim, and Yi in this way.

The combination of Aftab, Kim, and Yi does not disclose the following; however, Khorsandi teaches: and the hand detection unit is further configured to detect a position of a hand on a path of the webbing located within the upper body area (The examiner notes that this limitation is an additional operation performed by the hand detection unit that does not affect any of the operations recited in claim 1. The examiner treats this operation as a branching/independent step. Turning to Khorsandi, see Fig. 1 and page 1443, right column, section 4.1, quoted in the treatment of claim 14 above. The system, with the aid of the wizard, observes positions of a hand on a path of the seat belt webbing.).

Aftab, Kim, Yi, and Khorsandi together disclose the limitations of claim 12.
Khorsandi is directed to a similar field of art (improved user experience of in-car media interactions). Therefore, Aftab, Kim, Yi, and Khorsandi are combinable. Modifying the system and method of Aftab, Kim, and Yi by adding the capability of detecting “a position of a hand on a path of the webbing located within the upper body area”, as taught by Khorsandi, would yield the expected and predictable result of proof-of-concept of a comprehensive driver interaction and safety system in a vehicle. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Aftab, Kim, Yi, and Khorsandi in this way.

Claim 13 is met by the combination of Aftab, Kim, Yi, and Khorsandi, wherein:
The combination of Aftab, Kim, Yi, and Khorsandi discloses: The system of claim 12, wherein
And Yi further discloses: the signal output unit is further configured to: secure a classification value of the hand position by comparing coordinates of the detected hand position with designated coordinates of the upper body area, and output an output signal designated for the secured classification value (See page 540, left column, section B.2: “Since the driver holding the steering wheel with both hands while driving the vehicle, which may cause the seat belt connection in the two-dimensional image to be blocked and interrupted by the hand. It is necessary to add the automatic connection of the safety belt slash at the break in the detection network.” Detection of the break within the upper body bounded in red meets the claimed “comparing coordinates of the detected hand position with designated coordinates of the upper body area”. The addition of the “automatic connection” meets the claimed “output an output signal designated for the secured classification value”.). See the motivation to combine in the treatment of claim 12.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN S LEE, whose telephone number is (571) 272-1981. The examiner can normally be reached 11:30 AM - 7:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Jonathan S Lee/
Primary Examiner, Art Unit 2677

Prosecution Timeline

Mar 18, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602807
METHOD FOR SUBPIXEL DISPARITY CALCULATION
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602785
TRAINING A MACHINE LEARNING MODEL TO ASSESS EMBRYO CHARACTERISTICS FROM VIDEO IMAGE DATA
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597108
METHOD AND APPARATUS TO PERFORM A WIRELINE CABLE INSPECTION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597110
IMAGE RECOGNITION METHOD, APPARATUS AND DEVICE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12584727
DIMENSION MEASUREMENT METHOD AND DIMENSION MEASUREMENT DEVICE
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 94% (+9.5%)
Median Time to Grant: 2y 4m
PTA Risk: Low

Based on 585 resolved cases by this examiner. Grant probability derived from career allow rate.
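
If the footnote is taken literally, the projection is a direct read-through of the examiner statistics above rather than a fitted model. A minimal sketch under that assumption:

    # Sketch: projection as a read-through of the career stats (an assumption,
    # not a documented formula of this tool).
    base = 493 / 585                          # grant probability = career allow rate -> 84%
    with_interview = min(base + 0.095, 1.0)   # add the +9.5% interview lift, cap at 100%
    print(f"Grant probability: {base:.0%}; with interview: {with_interview:.0%}")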
