Prosecution Insights
Last updated: April 18, 2026
Application No. 18/095,117

Electronic Device and Method For Operating The Electronic Device

Status: Final Rejection (§103)
Filed: Jan 10, 2023
Examiner: WRIGHT, ANDREW RUSSELL
Art Unit: 2872
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Semiconductor Energy Laboratory Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 55% (Moderate); 99% with interview
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 1m

Examiner Intelligence

Career Allow Rate: 55% (grants 55% of resolved cases; 11 granted / 20 resolved; -13.0% vs TC avg)
Interview Lift: +50.0% (strong; resolved cases with interview vs. without)
Typical Timeline: 3y 1m avg prosecution; 35 applications currently pending
Career History: 55 total applications across all art units

Statute-Specific Performance

§103: 68.0% (+28.0% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)

Comparison baseline is the Tech Center average estimate; based on career data from 20 resolved cases.
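The headline examiner figures are simple ratios; a minimal sketch of the arithmetic (the Tech Center baseline of 68.0% is an assumption inferred from the -13.0% delta shown above, not independently sourced):

```python
# Reproduce the examiner's headline figures from the raw counts.
# The TC-average allowance baseline (68.0%) is an assumed value inferred
# from the -13.0% delta above, not sourced data.

granted, resolved = 11, 20
career_allow_rate = round(100.0 * granted / resolved, 1)   # 55.0

tc_avg_allow_rate = 68.0                                   # assumed baseline
delta_vs_tc = round(career_allow_rate - tc_avg_allow_rate, 1)

print(f"{career_allow_rate}% allow rate, {delta_vs_tc:+}% vs TC avg")
```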

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claims 1 and 11 are amended, claims 15 and 16 are cancelled, and claims 17-20 are new.

Response to Arguments

Applicant's arguments filed 12/16/2025 have been fully considered but they are not persuasive.

First, Applicant argues on page 9 that it is not proper to reject claims 1 and 11 with the combination of Fang, Mitsunari and Ogino, because Ogino discloses an apparatus for manufacturing displays instead of a display and there is no motivation to combine Ogino with Fang and Mitsunari. Examiner disagrees and states that the combination of Fang and Mitsunari discloses the limitations regarding the display device, that Ogino is used to teach the limitations regarding the autofocus, and that paragraph [0002] of the current application discloses that the present invention can include a manufacturing method. One would have been motivated to use the size of the beams to focus the lens because Ogino teaches that the detection of the spot size can be used for autofocus control (Ogino paragraph [0070]).

Second, Applicant argues on page 10 that it is not proper to reject claims 1 and 11 with the combination of Fang, Mitsunari and Ogino, because in Ogino the focus detector 29, as the light-receiving device, is a separate device from the display device. Examiner disagrees and has cited Mitsunari to disclose the light-receiving device in the display device, "wherein the display device (display panel 200 fig. 2A) comprises a light-receiving device (optical sensor 220 receives light reflected from the eyes paragraph [0029] of translation)," and only relies on Ogino to teach the limitations regarding the autofocus.
Third, Applicant argues on page 11 that Ogino does not disclose the features of claims 1 and 11, including the light emitted from the light-emitting device being detected by the light-receiving device, because Ogino discloses a separate light source, detection unit and target object. Examiner disagrees and has cited Mitsunari to disclose the light-receiving device and light-emitting device in the display device, "wherein the display device (display panel 200 fig. 2A) comprises a light-emitting device (display panel 210A emits light paragraph [0029] of translation) and a light-receiving device (optical sensor 220 receives light reflected from the eyes paragraph [0029] of translation)," and only relies on Ogino to teach the limitations regarding the autofocus. In response to applicant's argument that the combination of Fang, Mitsunari and Ogino does not disclose the features of claims 1 and 11 to arrive at the same device as the claimed invention because the light-receiving device and light-emitting device of Ogino are not interchangeable equals, the test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S.
Code not included in this action can be found in a prior Office action.

Claims 1-4 and 10-14 are rejected under 35 U.S.C. 103 as being unpatentable over Fang et al. (US 11009713 B1) in view of Mitsunari (JP 2022170475 A) and Ogino et al. (US 20110140007 A1).

Regarding claim 1, Fang discloses in at least figures 1 and 3, an electronic device (illustrating system 100 fig. 1) comprising: a housing comprising an optical device (display device 101 houses display 102, optics block 104 and camera 302 fig. 3), wherein the optical device (display 102, optics block 104 and camera 302 fig. 3) comprises a display device (display 102 fig. 3) and a lens (optics block 104 includes one or more lenses col. 5 lines 54-55), wherein the lens (optics block 104 includes one or more lenses col. 5 lines 54-55) is positioned on a display portion side (light is displayed from display 102 to optics block 104 fig. 3) of the display device (display 102 fig. 3). Fang does not explicitly disclose, wherein the display device comprises a light-emitting device and a light-receiving device, and wherein the housing is configured to detect, with the use of the light-receiving device, a spot diameter of first light that is emitted from the light-emitting device and reflected by a detection target, wherein the housing is configured to move the lens and detect, with the use of the light-receiving device, a spot diameter of second light that is emitted from the light-emitting device and reflected by the detection target, wherein the housing is configured to determine whether the spot diameter of the second light is smaller than the spot diameter of the first light, wherein the housing is configured to further move the lens and detect, with the use of the light-receiving device, a spot diameter of third light that is emitted from the light-emitting device and reflected by the detection target in the case where the spot diameter of the second light is smaller than the spot diameter of the first light, wherein the
housing is configured to determine whether the spot diameter of the third light is smaller than the spot diameter of the second light, and wherein the housing is configured to move the lens to a position at which the spot diameter of the first light has been detected, in the case where the spot diameter of the second light is larger than the spot diameter of the first light. However, Mitsunari discloses in at least figure 2A, wherein the display device (display panel 200 fig. 2A) comprises a light-emitting device (display panel 210A emits light paragraph [0029] of translation) and a light-receiving device (optical sensor 220 receives light reflected from the eyes paragraph [0029] of translation). Fang discloses the claimed invention except that the display panel does not include the light receiving device. Mitsunari shows that the display panel including the light receiving device is an equivalent structure in the art. Therefore, because these two displays were art-recognized equivalents before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to substitute the display with the light receiving device instead of the display and a separate light receiving device, and the results thereof would have been predictable. See MPEP §2144.06 and 2143 (I)(B). Additionally, Ogino discloses in at least figure 4, wherein the housing (semiconductor manufacturing device paragraph [0065]) is configured to detect (the object lens 25 is controlled to move in an optical axis direction (Z direction), by utilizing one or more focus error signals based on the reflection light of the laser light passing through the above-described six small core sections 41A to 41F paragraph [0070]), with the use of the light-receiving device (focus detector 29 fig.
4), a spot diameter of first light (laser light spot size while out of focus paragraph [0070]) that is emitted from the light emitting device (the laser light 34-36 is emitted through the waveguide part paragraph [0068]) and reflected by a detection target (the light is reflected by the target object 37 paragraph [0069]), wherein the housing (semiconductor manufacturing device paragraph [0065]) is configured to move the lens and detect (the objective lens 25 can be moved in the z direction paragraph [0070]), with the use of the light-receiving device (focus detector 29 fig. 4), a spot diameter of second light (laser light spot size becomes smaller while focusing on the target paragraph [0070]) that is emitted (the laser light 34-36 is emitted through the waveguide part paragraph [0068]) and reflected by a detection target (the light is reflected by the target object 37 paragraph [0069]), wherein the housing (semiconductor manufacturing device paragraph [0065]) is configured to determine whether the spot diameter of the second light (laser light spot size becomes smaller while focusing on the target paragraph [0070]) is smaller than (the laser light spot size while out of focus is larger than the size of the laser light spot size while focusing on the object paragraph [0070]) the spot diameter of the first light (laser light spot size while out of focus paragraph [0070]), wherein the housing (semiconductor manufacturing device paragraph [0065]) is configured to further move the lens and detect (the objective lens 25 can be moved in the z direction paragraph [0070]), with the use of the light-receiving device (focus detector 29 fig. 
4), a spot diameter of third light (the laser spot size is most narrowed in a focusing state paragraph [0069]) that is emitted from the light emitting device (the laser light 34-36 is emitted through the waveguide part paragraph [0068]) and reflected by the detection target (the light is reflected by the target object 37 paragraph [0069]) in the case where the spot diameter of the second light (laser light spot size becomes smaller while focusing on the target paragraph [0070]) is smaller than (the laser light spot size while out of focus is larger than the size of the laser light spot size while focusing on the object paragraph [0070]) the spot diameter of the first light (laser light spot size while out of focus paragraph [0070]), wherein the housing (semiconductor manufacturing device paragraph [0065]) is configured to determine whether the spot diameter of the third light (the laser spot size is most narrowed in a focusing state paragraph [0069]) is smaller than (the laser spot size is most narrowed in a focusing state is smaller than the laser light spot size that becomes smaller while focusing on the target paragraphs [0069-0070]) the spot diameter of the second light (laser light spot size becomes smaller while focusing on the target paragraph [0070]), and wherein the housing (semiconductor manufacturing device paragraph [0065]) is configured to move the lens (the object lens 25 is controlled to move in an optical axis direction (Z direction), by utilizing one or more focus error signals paragraph [0070]) to a position at which the spot diameter of the first light has been detected (laser light spot size while out of focus paragraph [0070]), in the case where the spot diameter of the second light (laser light spot size becomes smaller while focusing on the target paragraph [0070]) is larger than (when the laser spot is larger in the out of focus state the object lens 25 is controlled to move in an optical axis direction (Z direction), by utilizing one or 
more focus error signals and becomes smaller while moving to the in focus state paragraph [0070]) the spot diameter of the first light (laser light spot size while out of focus paragraph [0070]). Ogino further teaches (paragraph [0070]): "the semiconductor manufacturing apparatus can perform autofocus control, in which the object lens 25 is controlled to move in an optical axis direction (Z direction), by utilizing one or more focus error signals based on the reflection light of the laser light passing through the above-described six small core sections 41A to 41F. By performing the autofocus control, a stable spot size can be kept against the fluctuation (in the Z direction) of the object due to disturbance. As the core used for the autofocus control, the small core section 41B or 41E, which is placed at a center part in the longitudinal direction of the core section 39 of the waveguide unit passing the laser for modification, is desirable." Therefore it would be obvious for one skilled in the art before the effective filing date of the claimed invention to detect different sizes of light beams to focus the objective lens as taught by Ogino in the electronic device of Fang. One would have been motivated to use the size of the beams to focus the lens because Ogino teaches that the detection of the spot size can be used for autofocus control (Ogino paragraph [0070]).

Regarding claim 2, the combination of Fang, Mitsunari and Ogino discloses all the limitations of claim 1 and Fang further discloses, wherein the light-emitting device is configured to emit infrared light (infrared light is emitted within display device 101 col. 6 lines 58-59).

Regarding claim 3, the combination of Fang, Mitsunari and Ogino discloses all the limitations of claim 2 and Fang further discloses, wherein the light-receiving device (camera 302 fig.
3) is configured to detect infrared light (the reflected light is received or detected by the camera and analyzed to extract eye rotation information from changes in the infrared light reflected by each eye col. 6 lines 60-62).

Regarding claim 4, the combination of Fang, Mitsunari and Ogino discloses all the limitations of claim 1 and Fang further discloses, wherein the detection target (camera 302 captures images of the user's eyes col. 11 lines 22-23) is a user's eye (eye 300 fig. 3).

Regarding claim 10, the combination of Fang, Mitsunari and Ogino discloses all the limitations of claim 1 and Fang further discloses, wherein the housing (display device 101 houses display 102, optics block 104 and camera 302 fig. 3) is connected to a mounting fixture (a band goes around the user's head col. 10 lines 32-33), and wherein the mounting fixture is configured to fix the housing to a user's head (display device 101 is a head mounted display col. 10 line 30).

Regarding claim 11, Fang discloses in at least figures 1, 3 and 8A-8B, a method (method 800 fig. 8A) for operating an electronic device (illustrating system 100 fig. 1), the electronic device (illustrating system 100 fig. 1) comprising an optical device (display device 101 houses display 102, optics block 104 and camera 302 fig. 3) comprising a display device (display 102 fig. 3) and a lens (optics block 104 includes one or more lenses col. 5 lines 54-55), wherein the lens (optics block 104 includes one or more lenses col. 5 lines 54-55) is positioned on a display portion side (light is displayed from display 102 to optics block 104 fig. 3) of the display device (display 102 fig. 3), the method (method 800 fig. 8A) comprising: a first step (step 802 fig. 8A) of displaying an image (the display is configured to project light col. 17 lines 45-46).
Fang does not explicitly disclose, wherein the display device comprises a light-emitting device and a light-receiving device, and a second step of detecting, with the use of the light-receiving device, a spot diameter of first light that is emitted from the light-emitting device and reflected by a detection target; a third step of moving the lens and detecting, with the use of the light-receiving device, a spot diameter of second light that is emitted from the light-emitting device and reflected by the detection target; a fourth step of determining whether the spot diameter of the second light is smaller than the spot diameter of the first light; a fifth step of further moving the lens and detecting, with the use of the light-receiving device, a spot diameter of third light that is emitted from the light-emitting device and reflected by the detection target in the case where the spot diameter of the second light is smaller than the spot diameter of the first light; a sixth step of determining whether the spot diameter of the third light is smaller than the spot diameter of the second light; and a seventh step of moving the lens to a position at which the spot diameter of the first light has been detected, in the case where the spot diameter of the second light is larger than the spot diameter of the first light. However, Mitsunari discloses in at least figure 2A, wherein the display device (display panel 200 fig. 2A) comprises a light-emitting device (display panel 210A emits light paragraph [0029] of translation) and a light-receiving device (optical sensor 220 receives light reflected from the eyes paragraph [0029] of translation). Fang discloses the claimed invention except that the display panel does not include the light receiving device. Mitsunari shows that the display panel including the light receiving device is an equivalent structure in the art.
Therefore, because these two displays were art-recognized equivalents before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to substitute the display with the light receiving device instead of the display and a separate light receiving device, and the results thereof would have been predictable. See MPEP §2144.06 and 2143 (I)(B). Additionally, Ogino discloses in at least figure 4, a second step of detecting (the object lens 25 is controlled to move in an optical axis direction (Z direction), by utilizing one or more focus error signals based on the reflection light of the laser light passing through the above-described six small core sections 41A to 41F paragraph [0070]), with the use of the light-receiving device (focus detector 29 fig. 4), a spot diameter of first light (laser light spot size while out of focus paragraph [0070]) that is emitted from the light-emitting device (the laser light 34-36 is emitted through the waveguide part paragraph [0068]) and reflected by a detection target (the light is reflected by the target object 37 paragraph [0069]); a third step of moving the lens and detecting (the objective lens 25 can be moved in the z direction paragraph [0070]), with the use of the light-receiving device (focus detector 29 fig.
4), a spot diameter of second light (laser light spot size becomes smaller while focusing on the target paragraph [0070]) that is emitted (the laser light 34-36 is emitted through the waveguide part paragraph [0068]) and reflected by a detection target (the light is reflected by the target object 37 paragraph [0069]); a fourth step of determining whether the spot diameter of the second light (laser light spot size becomes smaller while focusing on the target paragraph [0070]) is smaller than (the laser light spot size while out of focus is larger than the size of the laser light spot size while focusing on the object paragraph [0070]) the spot diameter of the first light (laser light spot size while out of focus paragraph [0070]); a fifth step of further moving the lens and detecting (the objective lens 25 can be moved in the z direction paragraph [0070]), with the use of the light-receiving device (focus detector 29 fig. 4), a spot diameter of third light (the laser spot size is most narrowed in a focusing state paragraph [0069]) that is emitted from the light-emitting device (the laser light 34-36 is emitted through the waveguide part paragraph [0068]) and reflected by the detection target (the light is reflected by the target object 37 paragraph [0069]) in the case where the spot diameter of the second light (laser light spot size becomes smaller while focusing on the target paragraph [0070]) is smaller than (the laser light spot size while out of focus is larger than the size of the laser light spot size while focusing on the object paragraph [0070]) the spot diameter of the first light (laser light spot size while out of focus paragraph [0070]); a sixth step of determining whether the spot diameter of the third light (the laser spot size is most narrowed in a focusing state paragraph [0069]) is smaller than (the laser spot size is most narrowed in a focusing state is smaller than the laser light spot size that becomes smaller while focusing on the target paragraphs
[0069-0070]) the spot diameter of the second light (laser light spot size becomes smaller while focusing on the target paragraph [0070]); and a seventh step of moving the lens (the object lens 25 is controlled to move in an optical axis direction (Z direction), by utilizing one or more focus error signals paragraph [0070]) to a position at which the spot diameter of the first light has been detected (laser light spot size while out of focus paragraph [0070]), in the case where the spot diameter of the second light (laser light spot size becomes smaller while focusing on the target paragraph [0070]) is larger than (when the laser spot is larger in the out of focus state the object lens 25 is controlled to move in an optical axis direction (Z direction), by utilizing one or more focus error signals and becomes smaller while moving to the in focus state paragraph [0070]) the spot diameter of the first light (laser light spot size while out of focus paragraph [0070]). Ogino further teaches (paragraph [0070]): "the semiconductor manufacturing apparatus can perform autofocus control, in which the object lens 25 is controlled to move in an optical axis direction (Z direction), by utilizing one or more focus error signals based on the reflection light of the laser light passing through the above-described six small core sections 41A to 41F. By performing the autofocus control, a stable spot size can be kept against the fluctuation (in the Z direction) of the object due to disturbance. As the core used for the autofocus control, the small core section 41B or 41E, which is placed at a center part in the longitudinal direction of the core section 39 of the waveguide unit passing the laser for modification, is desirable." Therefore it would be obvious for one skilled in the art before the effective filing date of the claimed invention to detect different sizes of light beams to focus the objective lens as taught by Ogino in the electronic device of Fang.
One would have been motivated to use the size of the beams to focus the lens because Ogino teaches that the detection of the spot size can be used for autofocus control (Ogino paragraph [0070]).

Regarding claim 12, the combination of Fang, Mitsunari and Ogino discloses all the limitations of claim 1 and Fang further discloses, wherein, in the first step (step 802 fig. 8A), the light-emitting device emits infrared light while the image is displayed (infrared light is emitted within display device 101 col. 6 lines 58-59).

Regarding claim 13, the combination of Fang, Mitsunari and Ogino discloses all the limitations of claim 12 and Fang further discloses, wherein the light-receiving device detects infrared light reflected by the detection target (the reflected light is received or detected by the camera and analyzed to extract eye rotation information from changes in the infrared light reflected by each eye col. 6 lines 60-62) in the second step (step 806 determine first reference position based on position of the eye using reflected light fig. 8A), the third step (step 824 determining a position of display reference position can be repeated fig. 8B), and the fifth step (step 824 determining a position of display reference position can be repeated fig. 8B). Fang does not disclose, wherein the light-receiving device detects a spot diameter. However, Ogino further discloses the light-receiving device (focus detector 29 fig. 4) detects a spot diameter (the focus detector 29 generates focus error signals 33A to 33F corresponding to the laser spots of the respective small core sections 41A to 41F paragraph [0069]).
Ogino further teaches (paragraph [0070]): "the semiconductor manufacturing apparatus can perform autofocus control, in which the object lens 25 is controlled to move in an optical axis direction (Z direction), by utilizing one or more focus error signals based on the reflection light of the laser light passing through the above-described six small core sections 41A to 41F. By performing the autofocus control, a stable spot size can be kept against the fluctuation (in the Z direction) of the object due to disturbance. As the core used for the autofocus control, the small core section 41B or 41E, which is placed at a center part in the longitudinal direction of the core section 39 of the waveguide unit passing the laser for modification, is desirable." Therefore it would be obvious for one skilled in the art before the effective filing date of the claimed invention to detect different sizes of light beams to focus the objective lens as taught by Ogino in the electronic device of Fang. One would have been motivated to use the size of the beams to focus the lens because Ogino teaches that the detection of the spot size can be used for autofocus control (Ogino paragraph [0070]).

Regarding claim 14, the combination of Fang, Mitsunari and Ogino discloses all the limitations of claim 11 and Fang further discloses, wherein the detection target (camera 302 captures images of the user's eyes col. 11 lines 22-23) is a user's eye (eye 300 fig. 3).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Fang et al. (US 11009713 B1) in view of Mitsunari (JP 2022170475 A) and Ogino (US 20110140007 A1) as applied to claim 1 and in further view of Zavracky et al. (US 20010045927 A1).

Regarding claim 5, the combination of Fang, Mitsunari and Ogino discloses all the limitations of claim 1. Fang does not disclose, wherein a diagonal of a display portion of the display device is shorter than a diameter of the lens. However, Zavracky discloses in at least fig.
3A, wherein a diagonal of a display portion of the display device (display diagonal 0.25 inch = 6.10 mm paragraph [0063]) is shorter than (6.10 mm is shorter than 30.4 mm paragraph [0063]) a diameter of the lens (lens diameter 30.4 mm paragraph [0063]). Zavracky further teaches (paragraphs [0061]-[0062]): "These small high resolution displays require magnification such that when held in a user's hand within the range of 0.5 inches to 10 inches of the user's eye, a clear image is provided. A lens 80 suitable for magnifying the image of a microdisplay for viewing by a user is illustrated in the example of FIG. 3A." Therefore it would be obvious for one skilled in the art before the effective filing date of the claimed invention to use a lens larger than the display as taught by Zavracky in the electronic device of Fang. One would have been motivated to use a larger lens because Zavracky teaches that the larger lens is used to provide a clear image with a close display to the user's eye (Zavracky paragraphs [0061]-[0062]).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Fang et al. (US 11009713 B1) in view of Mitsunari (JP 2022170475 A) and Ogino (US 20110140007 A1) as applied to claim 1 and in further view of Yamazaki (US 20220392982 A1).

Regarding claim 6, the combination of Fang, Mitsunari and Ogino discloses all the limitations of claim 1. Fang does not disclose, wherein a pixel density of the display device is higher than or equal to 1,000 ppi and lower than or equal to 20,000 ppi. However, Yamazaki discloses in at least Embodiment 1, wherein a pixel density of the display device is higher than or equal to 1,000 ppi and lower than or equal to 20,000 ppi (the pixel density of the display panel is preferably 2,000 ppi or higher and 6,000 ppi or lower). Yamazaki further teaches (paragraphs [0072]-[0073]): "The display panel preferably has a higher resolution.
The resolution of the display panel can be 500 ppi or higher, preferably 800 ppi or higher, further preferably 1000 ppi or higher, still further preferably 2000 ppi or higher, and yet further preferably 3000 ppi or higher, and 10000 ppi or lower, 8000 ppi or lower, or 6000 ppi or lower, for example. As the resolution increases, the sense of immersion can be enhanced. The display panel preferably has a higher definition. For example, the definition of the display panel is preferably as extremely high as HD (1280x720 effective pixels), FHD (1920x1080 effective pixels), WQHD (2560x1440 effective pixels), WQXGA (2560x1600 effective pixels), 4K (3840x2160 effective pixels), or 8K (7680x4320 effective pixels), and preferably 4K2K, 8K4K, or higher, in particular." Therefore it would be obvious for one skilled in the art before the effective filing date of the claimed invention to use a display with a pixel density as taught by Yamazaki in the electronic device of Fang. One would have been motivated to use a pixel density between 2,000 and 6,000 ppi because Yamazaki teaches that the pixel density provides an increased resolution. (Zavracky paragraphs [0061]-[0062]).

Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Fang et al. (US 11009713 B1) in view of Mitsunari (JP 2022170475 A) and Ogino (US 20110140007 A1) as applied to claim 1 and in further view of Ito (US 20180114937 A1).

Regarding claim 7, the combination of Fang, Mitsunari and Ogino discloses all the limitations of claim 1. Fang does not disclose, wherein the display device comprises a plurality of the light-emitting devices and a color filter, and wherein the plurality of light-emitting devices each comprise an organic layer emitting white light. However, Ito discloses in at least figure 1, wherein the display device (display apparatus paragraph [0019]) comprises a plurality of the light-emitting devices (light emitting elements 2 fig. 1) and a color filter (color filters 18R, 18G, and 18B fig.
1), and wherein the plurality of light-emitting devices (light emitting elements 2 fig. 1) each comprise an organic layer (organic layer 15 fig. 1) emitting white light (the organic layer 15 sandwiched between the pixel electrode 13 and the upper electrode 16 includes at least a white light emitting layer paragraph [0020]). Fang discloses the claimed invention except the display emitting white light with an organic layer and color filters. Ito shows that the display emitting white light with an organic layer and color filters is an equivalent structure in the art. Therefore, because these displays were art-recognized equivalents before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to substitute the display emitting white light with an organic layer and color filters instead of another display, and the results thereof would have been predictable. See MPEP §2144.06 and 2143 (I)(B).

Regarding claim 8, the combination of Fang, Mitsunari, Ogino and Ito discloses all the limitations of claim 7. Fang does not disclose, wherein the organic layer is divided between two adjacent light-emitting devices. However, Ito further discloses, wherein the organic layer (organic layer 15 fig. 1) is divided between (the organic layer 15 is across all of the light emitting elements 2 fig. 1) two adjacent light-emitting devices (light emitting elements 2 are adjacent fig. 1). Fang discloses the claimed invention except the display emitting white light with an organic layer and color filters. Ito shows that the display emitting white light with an organic layer and color filters is an equivalent structure in the art.
Therefore, because these displays were art-recognized equivalents before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to substitute the display emitting white light with an organic layer and color filters for another display, and the results thereof would have been predictable. See MPEP §2144.06 and §2143(I)(B).

Regarding claim 9, the combination of Fang, Mitsunari and Ogino discloses all the limitations of claim 1. Fang does not disclose wherein the display device comprises a first light-emitting device and a second light-emitting device, and wherein the first light-emitting device and the second light-emitting device comprise different light-emitting materials. However, Ito discloses in figure 1, wherein the display device (display apparatus paragraph [0019]) comprises a first light-emitting device and a second light-emitting device, and wherein the first light-emitting device (pixel 1R fig. 1) and the second light-emitting device (pixel 1G fig. 1) comprise different light-emitting materials (the pixels have the same structure except for the color filters paragraph [0020]).

Ito further teaches (paragraph [0020]): "In the present exemplary embodiment, since the top emission method is used in which light is extracted from an electrode located on the opposite side to the substrate 10, the substrate 10 can be a transparent or non-transparent substrate. The pixel electrode 13 and the upper electrode 16 are provided with wiring (not illustrated) for supplying power thereto to cause emission of light. The organic layer 15 sandwiched between the pixel electrode 13 and the upper electrode 16 includes at least a white light emitting layer (not illustrated).
Further, color filters 18R, 18G, and 18B which respectively transmit red light, green light, and blue light are provided on the side of the light emitting elements 2 from which light is extracted, whereby white light emitted from the light emitting elements 2 is extracted outside the display apparatus as red light, green light, and blue light. Thus, a red pixel 1R, a green pixel 1G, and a blue pixel 1B of the display apparatus according to the present exemplary embodiment have the same structure except for the color filters 18R, 18G, and 18B (for convenience, the color filters 18R, 18G, and 18B will be referred to as "color filter 18" hereinafter). A black matrix 19 in FIG. 1 blocks light between the respective color filters 18."

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a display with different pixels as taught by Ito in the electronic device of Fang. One would have been motivated to use different pixels because Ito teaches that the pixels provide different colors (Ito paragraph [0020]).

Claims 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Fang et al. (US 11009713 B1) in view of Ogino et al. (US 20110140007 A1).

Regarding claim 15, Fang discloses in at least figures 1 and 3, an electronic device (system 100 fig. 1) comprising: a housing comprising an optical device (display device 101 houses display 102, optics block 104 and camera 302 fig. 3), wherein the optical device (display 102, optics block 104 and camera 302 fig. 3) comprises a display device (display 102 fig. 3) and a lens (optics block 104 includes one or more lenses col. 5 lines 54-55), and wherein the lens (optics block 104 includes one or more lenses col. 5 lines 54-55) is positioned on a display portion side (light is displayed from display 102 to optics block 104 fig. 3) of the display device (display 102 fig. 3).
Fang does not disclose wherein the housing is configured to detect a spot diameter of light that is emitted from the display device and reflected by a user's eye, and wherein the housing is configured to move the lens to make the spot diameter smaller. However, Ogino discloses in at least figure 4, wherein the housing (semiconductor manufacturing device paragraph [0065]) is configured to detect a spot diameter of light (the focus detector 29 generates focus error signals 33A to 33F corresponding to the laser spots of the respective small core sections 41A to 41F paragraph [0069]) that is emitted from the display device (the laser light 34-36 is emitted through the waveguide part paragraph [0068]) and reflected by a user's eye (camera 302 captures images of the user's eyes col. 11 lines 22-23), and wherein the housing (semiconductor manufacturing device paragraph [0065]) is configured to move the lens (the objective lens 25 can be moved in the z direction paragraph [0070]) to make the spot diameter smaller (the objective lens 25 is moved to focus the laser spot size to be the smallest in focus paragraph [0070]).

Regarding claim 16, the combination of Fang and Ogino discloses all the limitations of claim 15, and Fang further discloses wherein the housing (display device 101 houses display 102, optics block 104 and camera 302 fig. 3) is connected to a mounting fixture (a band goes around the user's head col. 10 lines 32-33), and wherein the mounting fixture is configured to fix the housing to a user's head (display device 101 is a head mounted display col. 10 line 30).

Claims 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Fang et al. (US 11009713 B1) in view of Mitsunari (JP 2022170475 A) and Ogino (US 20110140007 A1) as applied to claims 1 and 11 above and further in view of Kim (US 20230048195 A1).
Regarding claim 17, the combination of Fang, Mitsunari and Ogino discloses all the limitations of claim 1, and Fang further discloses wherein the detection target (camera 302 captures images of the user's eyes col. 11 lines 22-23) is a user's eye (eye 300 fig. 3), and wherein a distance between (distance d between the crystalline lens and the optics block 104 as shown below in fig. 4A) the lens (optics block 104 includes one or more lenses col. 5 lines 54-55) and a crystalline lens (the crystalline lens is part of the eye as shown below in fig. 4A) of the user's eye (eye 300 fig. 4A) is extended (d is extended in fig. 4A) by moving (varifocal actuation block 106 is configured to move optics block 104 col. 6 lines 1-2) the lens (optics block 104 includes one or more lenses col. 5 lines 54-55) toward (the optics block 104 is closer to the display 102 in fig. 4A) the display device (display 102 fig. 4A).

[Annotated reproduction of Fang fig. 4A]

Fang does not disclose so that a position where the first light passing through the crystalline lens of the user's eye forms an image is moved to a retina side of the user's eye. However, Kim discloses in at least figures 28A-C, so that a position (position as shown below in fig. 28A) where the first light (light as shown below in fig. 28A) passing through (the light passes through the crystalline lens as shown below in fig. 28A) the crystalline lens (crystalline lens as shown below in fig. 28A) of the user's eye (eye as shown below in fig. 28A) forms an image (an infinite distance object may be properly focused on a retina in the case of normal vision paragraph [0274]) is moved to a retina side (the position is moved to the retina when the nearsightedness is corrected by changing the focal distance of the eye paragraph [0275]) of the user's eye (eye as shown below in fig. 28A).
[Annotated reproduction of Kim fig. 28A]

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to correct the focal position of the image to the retina side of the eye as taught by Kim using the distance between the display and optics block of Fang. Light entering an eye lens can be properly focused on a retina with the same principle as the correction glasses for a near-sighted eye described above, so that the near-sighted user can view the infinite distance virtual image properly (Kim paragraph [0280]).

Regarding claim 18, the combination of Fang, Mitsunari and Ogino discloses all the limitations of claim 1, and Fang further discloses wherein the detection target (camera 302 captures images of the user's eyes col. 11 lines 22-23) is a user's eye (eye 300 fig. 3), and wherein a distance between (distance d between the crystalline lens and the optics block 104 as shown below in fig. 4A) the lens (optics block 104 includes one or more lenses col. 5 lines 54-55) and a crystalline lens (the crystalline lens is part of the eye as shown below in fig. 4A) of the user's eye (eye 300 fig. 4A) is extended (d is extended in fig. 4A) by moving (varifocal actuation block 106 is configured to move optics block 104 col. 6 lines 1-2) the lens (optics block 104 includes one or more lenses col. 5 lines 54-55) toward (the optics block 104 is closer to the display 102 in fig. 4A) the display device (display 102 fig. 4A).

[Annotated reproduction of Fang fig. 4A]

Fang does not disclose so that a position where the first light passing through the crystalline lens of the user's eye forms an image is moved to a retina side of the user's eye. However, Kim discloses in at least figures 28A-C, so that a position (position as shown below in fig. 28A) where the first light (light as shown below in fig. 28A) passing through (the light passes through the crystalline lens as shown below in fig. 28A) the crystalline lens (crystalline lens as shown below in fig. 28A) of the user's eye (eye as shown below in fig. 28A) forms an image (an infinite distance object may be properly focused on a retina in the case of normal vision paragraph [0274]) is moved to a retina side (the position is moved to the retina when the nearsightedness is corrected by changing the focal distance of the eye paragraph [0275]) of the user's eye (eye as shown below in fig. 28A).

[Annotated reproduction of Kim fig. 28A]

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to correct the focal position of the image to the retina side of the eye as taught by Kim using the distance between the display and optics block of Fang. Light entering an eye lens can be properly focused on a retina with the same principle as the correction glasses for a near-sighted eye described above, so that the near-sighted user can view the infinite distance virtual image properly (Kim paragraph [0280]).

Regarding claim 19, the combination of Fang, Mitsunari, Ogino and Kim discloses all the limitations of claim 17, and Fang further discloses wherein a distance between (distance d between the crystalline lens and the optics block 104 as shown below in fig. 4A) the lens (optics block 104 includes one or more lenses col. 5 lines 54-55) and a crystalline lens (the crystalline lens is part of the eye as shown below in fig. 4A) of the user's eye (eye 300 fig. 4A) is extended (d is extended in fig. 4A).

[Annotated reproduction of Fang fig. 4A]

Fang does not disclose to counter a myopic state. However, Kim further discloses to counter a myopic state (the position is moved to the retina when the nearsightedness is corrected by changing the focal distance of the eye paragraph [0275]).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to correct the focal position of the image to the retina side of the eye as taught by Kim using the distance between the display and optics block of Fang. Light entering an eye lens can be properly focused on a retina with the same principle as the correction glasses for a near-sighted eye described above, so that the near-sighted user can view the infinite distance virtual image properly (Kim paragraph [0280]).

Regarding claim 20, the combination of Fang, Mitsunari and Ogino discloses all the limitations of claim 1, and Fang further discloses wherein the detection target (camera 302 captures images of the user's eyes col. 11 lines 22-23) is a user's eye (eye 300 fig. 3), and wherein a distance between (distance d between the crystalline lens and the optics block 104 as shown below in fig. 4B) the lens (optics block 104 includes one or more lenses col. 5 lines 54-55) and a crystalline lens (the crystalline lens is part of the eye as shown below in fig. 4B) of the user's eye (eye 300 fig. 4A) is shortened (d is shortened in fig. 4B) by moving (varifocal actuation block 106 is configured to move optics block 104 col. 6 lines 1-2) the lens (optics block 104 includes one or more lenses col. 5 lines 54-55) toward (the optics block 104 is closer to the crystalline lens in fig. 4B) the crystalline lens (the crystalline lens is part of the eye as shown below in fig. 4B).

[Annotated reproduction of Fang fig. 4B]

Fang does not disclose so that a position where the first light passing through the crystalline lens of the user's eye forms an image is moved to a retina side of the user's eye. However, Kim discloses in at least figures 28A-C, so that a position (position as shown below in fig. 28C) where the first light (light as shown below in fig. 28C) passing through (the light passes through the crystalline lens as shown below in fig. 28C) the crystalline lens (crystalline lens as shown below in fig. 28C) of the user's eye (eye as shown below in fig. 28C) forms an image (in the case of farsightedness, an image is formed in rear of a retina paragraph [0275]) is moved to a retina side (the position is moved to the retina when the farsightedness is corrected by changing the focal distance of the eye paragraph [0275]) of the user's eye (eye as shown below in fig. 28C).

[Annotated reproduction of Kim fig. 28C]

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to correct the focal position of the image to the retina side of the eye as taught by Kim using the distance between the display and optics block of Fang. Light entering an eye lens can be properly focused on a retina with the same principle as the correction glasses for a far-sighted eye described above, so that the far-sighted user can view the infinite distance virtual image properly (Kim paragraph [0281]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Itoh (US 2009031615 A1) discloses an image display apparatus with a mirror to adjust spot size.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW R WRIGHT whose telephone number is (703)756-5822. The examiner can normally be reached Mon-Thurs 7:30-5, Friday 8-12.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pinping Sun, can be reached at 1-571-270-1284. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW R WRIGHT/
Examiner, Art Unit 2872

/PINPING SUN/
Supervisory Patent Examiner, Art Unit 2872

Prosecution Timeline

Jan 10, 2023
Application Filed
Sep 22, 2025
Non-Final Rejection — §103
Dec 16, 2025
Response Filed
Apr 03, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601858
LIGHT CONTROL FILM
2y 5m to grant Granted Apr 14, 2026
Patent 12585165
OPTICAL ELEMENT DRIVING MECHANISM
2y 5m to grant Granted Mar 24, 2026
Patent 12566492
OCULAR ANOMALY DETECTION VIA CONCURRENT PRESENTATION OF STIMULI TO BOTH EYES
2y 5m to grant Granted Mar 03, 2026
Patent 12474553
Zoom Lens, Camera Module, and Mobile Terminal
2y 5m to grant Granted Nov 18, 2025
Patent 12429664
CAMERA MODULE
2y 5m to grant Granted Sep 30, 2025
Based on the examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
55%
Grant Probability
99%
With Interview (+50.0%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
