Prosecution Insights
Last updated: April 19, 2026
Application No. 19/019,568

ELECTRONIC MIRROR APPARATUS

Status: Non-Final OA (§103)
Filed: Jan 14, 2025
Examiner: TEITELBAUM, MICHAEL E
Art Unit: 2422
Tech Center: 2400 — Computer Networks
Assignee: JVCKENWOOD Corporation
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 78% (683 granted / 870 resolved; +20.5% vs TC avg, above average)
Interview Lift: +14.2% (moderate lift, among resolved cases with interview)
Typical Timeline: 2y 4m average prosecution; 39 applications currently pending
Career History: 909 total applications across all art units

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§103: 62.4% (+22.4% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 870 resolved cases.

Office Action

§103
DETAILED ACTION

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bostrom et al. (US 2017/0347067, hereinafter "Bostrom") in view of Toyoshima (JP 2015-067254, hereinafter "Toyoshima"), further in view of Yamazaki (JP 2004-306779, hereinafter "Yamazaki"), and further in view of Matsuyama et al. (US 2018/0065558, hereinafter "Matsuyama").

In regards to claim 1, Bostrom teaches:

"An electronic mirror apparatus comprising: a first imaging unit that captures an image of a scene behind a vehicle"

Bostrom paragraph [0018] teaches the plurality of imagers 14 may further comprise a second imager 14b configured to capture a second image data 16b corresponding to a scene directed to an exterior region 20 proximate the vehicle 10.
In an exemplary embodiment, exterior region 20 may correspond to a rearward directed field of view 21 relative to a forward direction 22 of the vehicle 10.

"a second imaging unit that captures an image of a rear seat in the vehicle"

Bostrom paragraph [0018] teaches the plurality of imagers 14 may include a first imager 14a configured to capture a first image data 16a corresponding to an interior field of view 17 of a passenger compartment 18 of the vehicle 10. Bostrom paragraph [0022] teaches in some embodiments, the controller 40 may process the first image data 16a from the first imager 14a to identify a display-prompt (e.g., a gesture, motion, speech, or other form of input or stimulus) of a passenger 34 of the vehicle 10. The Examiner interprets from Figure 2 that the passenger 34 is located in a rear seat of the vehicle.

"an electronic mirror display unit that displays a rear image of the vehicle captured by the first imaging unit"

Bostrom paragraph [0019] teaches the display device 24 may correspond to a rearview display device 26 configured to demonstrate the second image data 16b of the rearward directed view 21. In this configuration, the display system 12 may be operable to display a series of images captured corresponding to scenes behind the vehicle 10.

"an image recognition unit that detects … and recognizes a [face] of a person or an animal in the rear seat in the image captured by the second imaging unit"

Bostrom paragraph [0023] teaches the controller 40 may be operable to crop the image data 16a focusing on the region of interest. The region of interest may be identified by the controller 40 based on a facial recognition process applied to the image data 16a, thereby identifying a facial region 44 of the passenger 34 or occupant.

"[displaying] a rear vehicle in the image captured by the first imaging unit and [a face] of a person or an animal in the rear seat in the image captured by the second imaging unit"

Bostrom Figure 3.
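The region-of-interest cropping Bostrom paragraph [0023] describes (run facial recognition, then crop the cabin image down to the facial region) can be sketched as follows. This is purely an illustrative reconstruction, not Bostrom's disclosed implementation; the function name, the margin parameter, and the assumption that an upstream detector supplies bounding boxes are all assumptions:

```python
def facial_crop_box(face_boxes, frame_w, frame_h, margin=0.2):
    """Pick the largest detected face and expand it by a margin, clamped
    to the frame, in the spirit of the facial-region cropping described
    in Bostrom [0023].

    face_boxes: (x, y, w, h) tuples from some upstream face detector
                (e.g. a Haar cascade or CNN detector, assumed here).
    Returns (x0, y0, x1, y1) crop coordinates, or None if no face."""
    if not face_boxes:
        return None
    x, y, w, h = max(face_boxes, key=lambda b: b[2] * b[3])  # largest face
    pw, ph = int(w * margin), int(h * margin)                # padding
    x0, y0 = max(0, x - pw), max(0, y - ph)
    x1, y1 = min(frame_w, x + w + pw), min(frame_h, y + h + ph)
    return (x0, y0, x1, y1)
```

The controller would then superimpose only this cropped region on the rear image, rather than the full cabin frame.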
"and a display control unit that superimposes an image representing the [face] on the rear image and displays a resultant image on the electronic mirror display unit and skips superimposition of the image representing the [face]"

Bostrom Figure 3. Bostrom paragraph [0021] teaches the controller 40 may be configured to selectively display the image data 16a and/or 16b in response to the one or more input signals or operating conditions of the vehicle 10. The Examiner interprets that selectively displaying would provide for skipping the display of face images.

Bostrom does not explicitly teach: "facial expression" and "image representing the facial expression"

Toyoshima teaches in embodiment 1 that the facial expression recognition operation is performed on the images captured by the cameras 6 and 8 provided corresponding to the display units 3 and 4 installed on the rear seat side of the vehicle. Since an icon for the specific facial expression recognized by the facial expression recognition operation is created and then output to the display unit 1 on the front seat side, the driver can safely confirm, for example, the child's facial expressions, and as a result this can contribute to safe driving.

It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have modified Bostrom in view of Toyoshima to have included the features of "facial expression" because the mother can feel safe and concentrate on driving (Toyoshima embodiment 1).

Bostrom/Toyoshima do not explicitly teach: "image recognition unit that detects a rear vehicle" and "and [moves] superimposition of the image … when the rear vehicle is detected by the image recognition unit in the rear image captured by the first imaging unit and when overlapping with an area in the rear image showing the rear vehicle cannot be avoided"

However, image recognition for vehicles is quite common.
Yamazaki teaches in paragraph [0042]: If a following vehicle has been detected in the central region 31B (S252; YES), the process proceeds to S254, and it is determined whether a following vehicle has been detected in the left region 31A.

Yamazaki paragraph [0043] teaches: If the following vehicle is not detected in the left area 31A (S254; NO), a setting to display information of the following vehicle on the left side of the rearview mirror 14 is performed (S255), and the process proceeds to S259. If the following vehicle is detected in the left area 31A (S254; YES), the process proceeds to S256, and it is determined whether the following vehicle is detected in the right area 31C.

Yamazaki paragraph [0044] teaches: If the following vehicle is not detected in the right area 31C (S256; NO), a setting to display information of the following vehicle on the right portion of the rearview mirror 14 is performed (S257), and the process proceeds to S259.

As described above, in S252 to S257, an area in which the rear vehicle is not shown in the rearview mirror 14 is selected, and a setting to display the information of the following vehicle in that area is performed. If there is no area in which the following vehicle is not shown (S256; YES), a setting to move the following-vehicle information to the upper end of the rearview mirror 14 and display it there is performed (S258).
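Yamazaki's S252-S258 flow quoted above amounts to a small placement policy: put the overlay in a mirror region where no following vehicle appears, and move it to the top edge when every region is occupied. A minimal sketch follows; the region names and the behavior when the central region is clear are assumptions, since the quoted passage only covers the S252; YES branch:

```python
def choose_overlay_position(detected):
    """Where to draw the following-vehicle overlay on the mirror image.

    detected maps 'left'/'center'/'right' (Yamazaki's regions 31A/31B/31C)
    to whether a following vehicle is visible in that region."""
    if not detected.get("center"):
        return "center"        # assumed default when the center is clear
    if not detected.get("left"):
        return "left"          # S254-S255: left region is free
    if not detected.get("right"):
        return "right"         # S256-S257: right region is free
    return "top"               # S258: overlap unavoidable -> top edge
```

For example, with vehicles visible in the center and left regions only, the overlay lands on the right; with all three regions occupied, it moves to the top edge.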
It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have modified Bostrom/Toyoshima in view of Yamazaki to have included the features of "[an image recognition unit] that detects a rear vehicle" and "and [moves] superimposition of the image … when the rear vehicle is detected by the image recognition unit in the rear image captured by the first imaging unit and when overlapping with an area in the rear image showing the rear vehicle cannot be avoided" because the position where the information of the following vehicle is displayed can be prevented from overlapping, and only the information required by the driver 21 can be displayed.

Bostrom/Toyoshima/Yamazaki do not explicitly teach: "and skips [superimposition of the image of the face] and the rear image"

Matsuyama paragraph [0079] teaches: In a case where a moving image captured by the rear-view camera 161 of a view behind the vehicle 300 is displayed by the display 130 as illustrated in (b) of FIG. 3, the situation behind the vehicle 300 can be displayed without being obstructed by a person or thing present in the vehicle 300.

It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have modified Bostrom/Toyoshima/Yamazaki in view of Matsuyama to have included the features of "and skips [superimposition of the image of the face] and the rear image" because it is necessary to ensure the safety of the driver who operates the operation device (Matsuyama [0016]).

In regards to claim 2, Bostrom/Toyoshima/Yamazaki/Matsuyama teach all the limitations of claim 1 and further teach: "wherein the display control unit selects an [image corresponding to the face] and superimposes the … image selected on the rear image" Bostrom Figure 3.
Bostrom/Toyoshima/Yamazaki/Matsuyama further teach: "icon image corresponding to the facial expression" and "superimposes the icon image"

Toyoshima teaches in embodiment 1 that the facial expression recognition operation is performed on the images captured by the cameras 6 and 8 provided corresponding to the display units 3 and 4 installed on the rear seat side of the vehicle. Since an icon for the specific facial expression recognized by the facial expression recognition operation is created and then output to the display unit 1 on the front seat side, the driver can safely confirm, for example, the child's facial expressions, and as a result this can contribute to safe driving.

Toyoshima also teaches in embodiment 1 that, since the mother is driving in a state where the map information is displayed on the display unit 1 (S2 in FIG. 2), an icon is displayed in a part of this map (for example, in the upper right, where it does not obstruct the map).

It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have modified Bostrom in view of Toyoshima to have included the features of "icon image corresponding to the facial expression" and "superimposes the icon image" because the mother can feel safe and concentrate on driving (Toyoshima embodiment 1).

In regards to claim 3, Bostrom/Toyoshima/Yamazaki/Matsuyama teach all the limitations of claim 1 and further teach: "wherein the display control unit superimposes an image representing a designated type of facial expression on the rear image"

Toyoshima teaches in embodiment 1 that the facial expression recognition operation is performed on the images captured by the cameras 6 and 8 provided corresponding to the display units 3 and 4 installed on the rear seat side of the vehicle.
An icon for the specific facial expression recognized by the facial expression recognition operation is created and then output to the display unit 1. Toyoshima also teaches in embodiment 1 that the sleeping face icon is output (S3 in FIG. 2) in the state in which the voice and the image for the child are output from the display units 3 and 4 and the speakers 5 and 7 on the rear seat side as described above. Then, a smiley face icon is displayed on the display unit 1 (S8 in FIG. 2).

It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have modified Bostrom in view of Toyoshima to have included the features of "wherein the display control unit superimposes an image representing a designated type of facial expression on the rear image" because the mother can feel safe and concentrate on driving (Toyoshima embodiment 1).

In regards to claim 4, Bostrom/Toyoshima/Yamazaki/Matsuyama teach all the limitations of claim 1 and further teach: "wherein the display control unit cuts out an image of an area including a face of the person or the animal from the image captured by the second imaging unit and superimposes the image of the area cut out to include the face on the rear image" Bostrom Figure 3.

In regards to claim 5, Bostrom/Toyoshima/Yamazaki/Matsuyama teach all the limitations of claim 1 and further teach: "[image] representing the facial expression"

Toyoshima teaches in embodiment 1 that, since the mother is driving in a state where the map information is displayed on the display unit 1 (S2 in FIG. 2), an icon is displayed in a part of this map (for example, in the upper right, where it does not obstruct the map). Toyoshima further teaches in embodiment 1 that the sleeping face icon is output (S3 in FIG. 2) in the state in which the voice and the image for the child are output from the display units 3 and 4 and the speakers 5 and 7 on the rear seat side as described above. Then, a smiley face icon is displayed on the display unit 1 (S8 in FIG. 2).

It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have modified Bostrom in view of Toyoshima to have included the features of "[image] representing the facial expression" because the mother can feel safe and concentrate on driving (Toyoshima embodiment 1).

Bostrom/Toyoshima/Yamazaki/Matsuyama further teach: "wherein the display control unit superimposes the image … on an area in the rear image outside an area showing the rear vehicle, when the rear vehicle is detected by the image recognition unit in the rear image captured by the first imaging unit"

Yamazaki teaches in paragraph [0042]: If a following vehicle has been detected in the central region 31B (S252; YES), the process proceeds to S254, and it is determined whether a following vehicle has been detected in the left region 31A.

Yamazaki paragraph [0043] teaches: If the following vehicle is not detected in the left area 31A (S254; NO), a setting to display information of the following vehicle on the left side of the rearview mirror 14 is performed (S255), and the process proceeds to S259. If the following vehicle is detected in the left area 31A (S254; YES), the process proceeds to S256, and it is determined whether the following vehicle is detected in the right area 31C.

Yamazaki paragraph [0044] teaches: If the following vehicle is not detected in the right area 31C (S256; NO), a setting to display information of the following vehicle on the right portion of the rearview mirror 14 is performed (S257), and the process proceeds to S259.
As described above, in S252 to S257, an area in which the rear vehicle is not shown in the rearview mirror 14 is selected, and a setting to display the information of the following vehicle in that area is performed. If there is no area in which the following vehicle is not shown (S256; YES), a setting to move the following-vehicle information to the upper end of the rearview mirror 14 and display it there is performed (S258).

It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have modified Bostrom/Toyoshima in view of Yamazaki to have included the features of "wherein the display control unit superimposes the image … on an area in the rear image outside an area showing the rear vehicle, when the rear vehicle is detected by the image recognition unit in the rear image captured by the first imaging unit" because the position where the information of the following vehicle is displayed can be prevented from overlapping, and only the information required by the driver 21 can be displayed.

In regards to claim 6, Bostrom/Toyoshima/Yamazaki/Matsuyama teach all the limitations of claim 1 and further teach: "wherein the display control unit superimposes the image … on the rear image until a selected display time elapses"

Bostrom paragraph [0028] teaches the controller 40 may terminate the display of the image data 16a of the facial region 44 of the passenger 34 on the display device 24 after a predetermined period of time elapses following the completion of the speech of the occupant.

Bostrom/Toyoshima/Yamazaki/Matsuyama further teach: "representing the facial expression"

Toyoshima teaches in embodiment 1 that, since the mother is driving in a state where the map information is displayed on the display unit 1 (S2 in FIG. 2), an icon is displayed in a part of this map (for example, in the upper right, where it does not obstruct the map).
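The claim 6 mapping above rests on the timed termination in Bostrom [0028]: the overlay stays visible until a selected display time elapses after the triggering event. A hedged sketch of that behavior (the class and method names are illustrative, not Bostrom's; the injectable clock is just a testing convenience):

```python
import time

class OverlayTimer:
    """Keep an overlay visible until a selected display time has elapsed
    since the last triggering event (e.g. the occupant finishing speaking),
    loosely after Bostrom [0028]. Illustrative sketch only."""

    def __init__(self, display_seconds, clock=time.monotonic):
        self.display_seconds = display_seconds
        self._clock = clock          # injectable for testing
        self._last_event = None

    def trigger(self):
        """Record the triggering event and (re)start the display window."""
        self._last_event = self._clock()

    def overlay_visible(self):
        """True while the selected display time has not yet elapsed."""
        if self._last_event is None:
            return False
        return (self._clock() - self._last_event) < self.display_seconds
```

A display loop would call `overlay_visible()` each frame and skip the superimposition once it returns False.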
Toyoshima further teaches in embodiment 1 that the sleeping face icon is output (S3 in FIG. 2) in the state in which the voice and the image for the child are output from the display units 3 and 4 and the speakers 5 and 7 on the rear seat side as described above. Then, a smiley face icon is displayed on the display unit 1 (S8 in FIG. 2).

It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have modified Bostrom in view of Toyoshima to have included the features of "representing the facial expression" because the mother can feel safe and concentrate on driving (Toyoshima embodiment 1).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL E TEITELBAUM, Ph.D., whose telephone number is (571) 270-5996. The examiner can normally be reached 8:30 AM-5:00 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Miller, can be reached at 571-272-7353. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL E TEITELBAUM, Ph.D./
Primary Examiner, Art Unit 2422

Prosecution Timeline

Jan 14, 2025: Application Filed
Jan 16, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603975: MICRO LED PROJECTOR (2y 5m to grant; granted Apr 14, 2026)
Patent 12585205: LOW NUMERICAL APERTURE ALIGNMENT (2y 5m to grant; granted Mar 24, 2026)
Patent 12576803: Method for Controlling Two or More Comfort Functions of a Vehicle and Vehicle Device (2y 5m to grant; granted Mar 17, 2026)
Patent 12575294: SWITCHABLE TRANSPARENT ORGANIC LIGHT-EMITTING DIODE DISPLAYS WITH AN INTEGRATED ELECTRONIC INK LAYER (2y 5m to grant; granted Mar 10, 2026)
Patent 12574606: Remote Control Having Hotkeys with Dynamically Assigned Functions (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 93% (+14.2%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 870 resolved cases by this examiner. Grant probability derived from career allow rate.
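For what it's worth, the headline numbers in this card reduce to simple arithmetic over the career counts cited above, assuming the 78% figure is the raw career allow rate and the interview figure simply adds the quoted +14.2-point lift:

```python
granted, resolved = 683, 870            # career counts cited above
allow_rate = granted / resolved         # ~0.785 -> the "78%" grant probability
with_interview = allow_rate + 0.142     # ~0.927 -> the "93%" figure
print(f"allow rate: {allow_rate:.1%}, with interview: {with_interview:.1%}")
# prints: allow rate: 78.5%, with interview: 92.7%
```

The exact allow rate is 78.5%, so the card appears to truncate rather than round; 78.5% + 14.2 points rounds to the 93% shown.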
