Prosecution Insights
Last updated: April 19, 2026
Application No. 18/985,601

ELECTRONIC DEVICE

Non-Final OA §103
Filed
Dec 18, 2024
Examiner
BLANCHA, JONATHAN M
Art Unit
2623
Tech Center
2600 — Communications
Assignee
Semiconductor Energy Laboratory Co. Ltd.
OA Round
3 (Non-Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 71%

Examiner Intelligence

Career Allow Rate: 62% — grants 62% of resolved cases (408 granted / 661 resolved); at TC average
Interview Lift: +9.4% — moderate lift for resolved cases with interview
Avg Prosecution: 2y 7m (typical timeline)
Total Applications: 678 across all art units; 17 currently pending

Statute-Specific Performance

§101: 0.3% (-39.7% vs TC avg)
§103: 69.4% (+29.4% vs TC avg)
§102: 23.2% (-16.8% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 661 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed on 12-01-25 has been entered and fully considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 2 are rejected under 35 U.S.C. 103 as being unpatentable over Ueno et al. (US 2022/0262236) in view of Robbins et al. (US 2016/0085300) and Weber (US 2020/0357183).

Regarding claim 1, Miller IV (Fig. 2, 3, 6, and 8) discloses an electronic device comprising: a housing (230, called a "frame" in [0045]); a display device (318); a first pair of cameras ("pair of cameras oriented in front of the user to handle the stereo imaging process 640 and also to capture hand gestures" discussed in [0104]); a second pair of cameras ("two wide-field-of-view machine vision cameras 316 (also referred to as world cameras) can be coupled to the housing 230 to image the environment around the user" discussed in [0053]); a third pair of cameras ("466 may be used to capture images of the eye 410" and "one camera may be utilized for each eye" discussed in [0084], see also "include three pairs of cameras" discussed in [0104]); and a system unit (260), wherein the electronic device is configured to display augmented reality contents in an AR mode ("display 220 can present AR/VR/MR content to a user" discussed in [0045]), wherein the first pair of cameras is configured to capture an image of user's hands movement for gesture operation (used to "capture hand gestures" as discussed above, see [0104]), wherein the second pair of cameras is configured to capture scenery (used to "image the environment around the user" as discussed above, see [0053]), wherein the third pair of cameras is configured to capture an image of a user's right eye and an image of a user's left eye ("capture images of the eye 410" and "one camera may be utilized for each eye" as discussed above, see [0084]), wherein the system unit is configured to perform a first processing based on the image of the first pair of cameras (the images received in 810 are processed in 820 and "used in determining pose data (e.g., head pose, eye pose, body pose, or hand gestures)" as discussed in [0119]), wherein the system unit is configured to generate image data based on the first processing (in 510, "identify that a particular UI needs to be populated based on a user input (e.g., gesture…", and then "generate data for the virtual UI" based on that gesture in 520, see [0091]), wherein the display device is configured to display the image data ("display 220 can present AR/VR/MR content to a user" discussed in [0045]).

However, Miller IV fails to teach or suggest wherein the system unit is positioned inside the housing, wherein the second pair of cameras has a longer focal length and a narrower angle of view than the first pair of cameras, or wherein in an AR mode, the image data comprises an image imitating a screen of a smartphone or a tablet terminal.

Ueno (Fig. 2, 3, and 13) discloses an electronic device comprising: a housing (the housing of terminal 1, seen best in Fig. 13); a display device (26); a first camera (24); a third pair of cameras ("left and right line-of-sight cameras 25" discussed in [0132], seen in Fig. 13); and a system unit (32), wherein the electronic device is configured to display augmented reality contents in an AR mode ("processor 32 generates rendering data of a virtual smartphone to be displayed in AR" discussed in [0080]), wherein the first camera (24) is configured to capture an image of user's hands movement for gesture operation (e.g., "hand gesture actions" as discussed in [0088], while "images captured by the field-of-view camera 24, and acquires the position data of the pedestrian's palm and fingertips" discussed in [0079]); wherein the third pair of cameras (25) is configured to capture an image of a user's right eye and an image of a user's left eye ("25 shoots the left and right eyeballs of a user" discussed in [0070]); wherein the system unit is configured to perform a first processing (operation P3) based on the image of the first camera ("processor 32 detects the palm and fingertips of the pedestrian based on the images captured by the field-of-view camera 24" discussed in [0079]), wherein the system unit is configured to generate image data based on the first processing ("processor 32 determines a display position of the virtual smartphone based on the position data of the user's palm" discussed in [0080]), wherein the display device is configured to display the image data ("AR display 26 to display a virtual object overlaid on a real space which can be seen by a user" discussed in [0081]), and wherein in an AR mode, the image data comprises an image imitating a screen of a smartphone or a tablet terminal (a "virtual smartphone" 11 seen in Fig. 2).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller IV so that, in an AR mode, the image data comprises an image imitating a screen of a smartphone or a tablet terminal as taught by Ueno, because this provides the user with a familiar user interface in AR or VR environments.

However, while Ueno teaches that some of the sensors are inside the housing (such as a line-of-sight camera 25, a direction sensor 27, and a distance measuring sensor 28, seen in Fig. 13), Miller IV and Ueno fail to provide specific details as to the location of the system unit, and so fail to teach or suggest wherein the system unit is specifically positioned inside the housing. Miller IV and Ueno also fail to teach or suggest wherein the second pair of cameras has a longer focal length and narrower angle of view than the first pair of cameras.

Robbins (Fig. 2) discloses an electronic device comprising: a housing (115); a display device (120); a camera (113); and a system unit (136), wherein the electronic device is configured to display augmented reality contents in an AR mode ("augmented… reality" discussed in [0033]), wherein the system unit is positioned inside the housing (136 seen inside of 115 in Fig. 2). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller IV and Ueno so the system unit is positioned inside the housing as taught by Robbins, because this provides protection for the system unit by allowing the housing to shield it from the environment.

However, Miller IV, Ueno, and Robbins still fail to teach or suggest wherein the second pair of cameras has a longer focal length and narrower angle of view than the first pair of cameras.

Weber (Fig. 1 and 2) discloses an electronic device comprising: a housing (shown supporting the cameras 120, displays 110, etc. in Fig. 1); a display device (110); first cameras (cameras 120, corresponding to the "one or more cameras 120 may be used to detect gestures performed by the user" as discussed in [0036]); second cameras (second cameras 120, corresponding to the "one or more cameras 120" used to "capture images of the user's environment" as discussed in [0034]); third cameras ("one or more eye/gaze tracking cameras 120 that face toward the user and are used to track the user's eyes" discussed in [0035]); and a system unit (115), wherein the electronic device is configured to display augmented reality contents in an AR mode (as seen in Fig. 2, "view 290 comprises an electronic document 220 that is displayed in the user's field of view by the AR/VR headset" discussed in [0050]), wherein the first cameras are configured to capture an image of user's hands movement for gesture operation ("the captured images are analyzed to identify potential gestures formed by the user's hand(s)" discussed in [0036]); wherein the second cameras are configured to capture scenery (used to "capture images of the user's environment" as discussed above, see [0034]), wherein the display device is configured to display the image data (110 are used to "reflect light projected by the headset 100 into the user's eyes to implement a virtual view" discussed in [0031]). Additionally, Weber discloses wherein the second camera has a long focal length (the second camera is "a camera with a long focal length (e.g., a zoom lens), which produces an image with higher magnification than the user's vision" as discussed in [0109], which corresponds to a narrower field of view than the user's field of view, as seen in Fig. 15). Weber also discloses wherein the first camera has a first field of view ("capture photos within the user's field of view" discussed in [0036]) and wherein the second camera has a second field of view ("a camera with a telephoto lens may capture a field of view that is much narrower than that of the user" as discussed in [0104]). Additionally, Ueno discloses wherein the angle of view of the first camera is the same as a user's field of view (24 is "a field-of-view camera for capturing the user's field of view" as discussed in [0031]).

Therefore, the combination of Miller IV, Ueno, and Robbins with Weber would provide an electronic device wherein the second pair of cameras has a longer focal length (Miller IV teaches pairs of cameras, Ueno teaches the first camera has a field of view equal to the user's field of view, and Weber teaches "cameras 120 that have different focal lengths" discussed in [0034] and that the second camera has a "long" focal length that causes a narrower field of view, so the first camera would have a comparatively shorter focal length with the wider field of view) and a narrower angle of view than the first pair of cameras (Ueno teaches the first camera has a field of view equal to the user's field of view, while Weber teaches the second camera has a field of view narrower than the user's field of view, see [0109] and Fig. 15). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miller IV, Ueno, and Robbins to include a second pair of cameras that has a longer focal length and narrower angle of view than the first pair of cameras as taught by Weber, because this allows the user to see "an image with higher magnification than the user's vision" (see [0109]).

Regarding claim 2, Miller IV, Ueno, Robbins, and Weber disclose an electronic device as discussed, and Ueno further discloses wherein a user operates the image data that appears to float in the air (e.g., floating in the air above the user's palm, as seen in Fig. 2) as in the case of operating a smartphone ("based on the position data of a fingertip of the user acquired in the hand detection operation P3, the processor 32 displays a pointer image 12 in AR, the pointer image corresponding to the position of the fingertip with which a user performs an operation as if the user touched a screen of the virtual smartphone" discussed in [0081]). It would have been obvious to one of ordinary skill in the art to combine Miller IV, Ueno, Robbins, and Weber for the same reasons as discussed above.

Response to Arguments

Applicant's arguments with respect to claim 1 have been considered but are moot in view of the new grounds of rejection. In view of the amendments, the reference of Miller IV has been added for new grounds of rejection.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN M BLANCHA whose telephone number is (571) 270-5890. The examiner can normally be reached Monday to Friday, 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chanh Nguyen, can be reached at 571-272-7772. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JONATHAN M BLANCHA/
Primary Examiner, Art Unit 2623

Prosecution Timeline

Dec 18, 2024
Application Filed
Sep 03, 2025
Non-Final Rejection — §103
Dec 01, 2025
Response Filed
Dec 12, 2025
Final Rejection — §103
Mar 16, 2026
Request for Continued Examination
Mar 18, 2026
Non-Final Rejection — §103
Mar 18, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603033
SCANNING IMAGE DATA TO AN ARRAY OF PIXELS AT AN INTERMEDIATE SCAN RATE DURING A TRANSITION BETWEEN DIFFERENT REFRESH RATES
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12603060
Display Device
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12598285
OPTICAL DISPLAY, IMAGE CAPTURING DEVICE AND METHODS WITH VARIABLE DEPTH OF FIELD
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12585121
NEAR-EYE DISPLAY HAVING OVERLAPPING PROJECTOR ASSEMBLIES
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12578801
METHOD AND DEVICE FOR DETECTING AND RESPONDING TO USER INPUT
Granted Mar 17, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 62%
With Interview: 71% (+9.4%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 661 resolved cases by this examiner. Grant probability derived from career allow rate.
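The arithmetic behind these projections, as described in the note above, can be sketched as follows. The inputs (408 granted / 661 resolved, +9.4 percentage-point interview lift) come from this page; the function names are illustrative, not part of any real analytics API.

```python
# Sketch of how the displayed figures could be derived from examiner career data.
# Assumes the dashboard's stated method: grant probability = career allow rate,
# and the interview figure = base rate plus an additive percentage-point lift.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a fraction of resolved cases."""
    return granted / resolved

def with_interview(base: float, lift_pp: float) -> float:
    """Apply an additive interview lift (in percentage points), capped at 100%."""
    return min(base + lift_pp / 100, 1.0)

base = allow_rate(408, 661)          # ≈ 0.617, displayed as 62%
boosted = with_interview(base, 9.4)  # ≈ 0.711, displayed as 71%

print(f"{base:.0%}")     # 62%
print(f"{boosted:.0%}")  # 71%
```

This reproduces the tile values: 408/661 rounds to 62%, and adding the 9.4-point lift yields the 71% with-interview figure.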
