Prosecution Insights
Last updated: April 19, 2026
Application No. 17/785,068

MULTILENS DIRECT VIEW NEAR EYE DISPLAY

Final Rejection (§102, §103)
Filed: Jun 14, 2022
Examiner: DUONG, HENRY ABRAHAM
Art Unit: 2872
Tech Center: 2800 (Semiconductors & Electrical Systems)
Assignee: Softoptics Reality Plus Ltd.
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 79% (above average; 357 granted / 452 resolved; +11.0% vs TC avg)
Interview Lift: +6.5% (moderate)
Typical Timeline: 2y 9m avg prosecution; 21 currently pending
Career History: 473 total applications across all art units

Statute-Specific Performance

§101: 2.3% (-37.7% vs TC avg)
§103: 58.3% (+18.3% vs TC avg)
§102: 24.5% (-15.5% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)

Based on career data from 452 resolved cases; TC averages are estimates.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Amendment

The amendments filed 06/26/25 have been entered.

Response to Arguments

Applicant’s arguments with respect to claims 1-24 have been considered but are moot because of the new ground of rejection necessitated by claim amendment. Applicant states on page 9, “Cheng doesn’t explicitly teach or suggest multiple “lens portions” (a prism isn’t a portion of a lens), or that such lens portions are “cut from a donor lens having a short EFL”, or that the lens portions are “glued together in a stacked arrangement” to form a single compound lens.” The examiner respectfully disagrees. As shown in fig. 12, there is a plurality of lens portions: one portion is shown as prism 101, and the other part in figure 12 is a lens. Paragraph [0003] states that the purpose of the optical system is a short focal length with a micro-display, and paragraph [0056] states that the portions can be adhered, glued, bonded, welded, or integrated by injection molding. Therefore, the reference meets the limitation of claim 16.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 5, 17, and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Massof, Robert W., et al. “37.1: invited paper: Full‐field high‐resolution binocular HMD.” SID Symposium Digest of Technical Papers, vol. 34, no. 1, May 2003, pp. 1145–1147, https://doi.org/10.1889/1.1832490.

Regarding claim 1, Massof teaches a system (system shown in images on page 1146) comprising, a plurality of stacked optical channels (abstract, sixteen miniature flat panel emissive SVGA color displays with spherical faceted lens array), each optical channel comprising at least a portion of a lens (abstract, a lens in the spherical faceted lens array) and at least a portion of a display (abstract, a display in sixteen miniature flat panel emissive SVGA color displays), at least two optical channels handling overlapped portions of a phase space of said system (p., right side, last paragraph, “The optical path is dictated by the size of the miniature displays and the amount of optical overlap between the images of adjacent displays”); and a channel image adapter (p. 1146, under title 3. Image Generation, “Each display is driven by its own computer, which contains a three dimensional “world view” in active memory. The computers are off-the-shelf IBM-type PCs with a single 1.6 GHz Athlon processor and 1 Gbyte of memory…The master computer informs each computer driving a display (a “slave” computer) what part of visual space it should be displaying”) to divide an input image into image segments for overlapped projection from said displays (p.
1146, first paragraph, “The large pupil size relative to the other optical apertures in the system requires that images from adjacent displays have significant overlap.”), one image segment associated with each optical channel (p. 1146, 3. Image Generation, “Each display is driven by its own computer, which contains a three dimensional “world view” in active memory. The computers are off-the-shelf IBM-type PCs with a single 1.6 GHz Athlon processor and 1 Gbyte of memory. The graphics adapter is an nVidia GeForce3, which is capable of drawing 100,000 vertices per sec. (These characteristics improve with each hardware manufacturing iteration, often with decreased cost.) Head position is measured by a tracking system (Intersense IS-600), which sends 6 degrees of freedom position data to a “master” computer. The master computer informs each computer driving a display (a “slave” computer) what part of visual space it should be displaying. The graphics components of the system therefore require 33 (16+16+1) computers. Note from the reference: Each display in the binocular HMD is driven by its own computer, which generates a unique image directed by a master computer to ensure precise alignment and coverage of the visual field.), said input image comprising data pixels each having a pixel display angle (p. 1145, 1. 
Introduction, “For a typical 19” UXGA display (1600x1200) viewed from a distance of 18 inches, each pixel has an angular size of 2 minutes of arc”… and other examples are given with specific displays and their corresponding angular resolution per pixel. Note from the reference: Each pixel in the input image corresponds to a specific angular size on the display, defining the pixel’s visual display angle which determines the angular resolution of the system), said channel image adapter to place a copy of each said data pixel into those of said image segments associated with those of said optical channels whose phase space includes said pixel display angle of said data pixel (p. 1146, 3. Image Generation, “Each display is driven by its own computer, which contains a three dimensional “world view” in active memory. The computers are off-the-shelf IBM-type PCs with a single 1.6 GHz Athlon processor and 1 Gbyte of memory. The graphics adapter is an nVidia GeForce3, which is capable of drawing 100,000 vertices per sec. (These characteristics improve with each hardware manufacturing iteration, often with decreased cost.) Head position is measured by a tracking system (Intersense IS-600), which sends 6 degrees of freedom position data to a “master” computer. The master computer informs each computer driving a display (a “slave” computer) what part of visual space it should be displaying.” Note from the reference: the channel image adapter (software/calculation) places a copy of each data pixel into the corresponding image segments for each optical channel (display), based on spatial orientation (pixel display angles) so images align correctly in the combined visual field.). 
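The pixel-placement mechanism mapped above (each data pixel copied into every image segment whose optical channel’s phase space covers the pixel’s display angle) can be sketched in code. This is an illustrative sketch only, not part of the record; the function name, the three-channel geometry, and the angular ranges are all hypothetical.

```python
def divide_image(pixels, channels):
    """Sketch of a "channel image adapter" (hypothetical implementation).

    pixels: list of (pixel, display_angle_deg) pairs from the input image.
    channels: list of (angle_min_deg, angle_max_deg), one per optical channel.
    Returns one image segment (list of pixels) per channel; overlapping
    angular ranges make adjacent segments share pixels, giving the
    overlapped projection described in the rejection.
    """
    segments = [[] for _ in channels]
    for pixel, angle in pixels:
        for i, (lo, hi) in enumerate(channels):
            if lo <= angle <= hi:          # channel's phase space covers this angle
                segments[i].append(pixel)  # place a copy in that segment
    return segments

# Three hypothetical channels with overlapped angular coverage (degrees)
channels = [(-30.0, 5.0), (-5.0, 25.0), (20.0, 50.0)]
pixels = [("p0", -10.0), ("p1", 0.0), ("p2", 22.0)]
segments = divide_image(pixels, channels)
# "p1" (0 deg) falls in the overlap of channels 0 and 1, so both segments get a copy
```

The overlap regions are where two adjacent displays show the same pixels, which is the property the rejection reads onto the "significant overlap" passage of the reference.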
Regarding claim 5, Massof teaches the system (system shown in images on page 1146) of claim 1 and also comprising, a plurality of channel correctors, one per optical channel, each to provide compensation to its associated said image segment to correct imaging errors of its associated lens and to display its corrected image segment on its associated display (p. 1146, 3. Image Generation, “Images on the screens need to be aligned to a pixel level of precision: This is done by software in a calibration process that provides each slave with rotation and extension parameters that it then applies to the images it displays,” note from the reference: a channel corrector is the software calibration process that applies rotation and scaling adjustments to each display’s image to ensure pixel-level alignment and accurate visual integration across multiple optical channels in the binocular HMD system).

Regarding claim 17, Massof teaches a method (system shown in images on page 1146) comprising, stacking optical channels (abstract, sixteen miniature flat panel emissive SVGA color displays with spherical faceted lens array), each optical channel comprising at least a portion of a lens (abstract, a lens in the spherical faceted lens array) and at least a portion of a display (abstract, a display in sixteen miniature flat panel emissive SVGA color displays), at least two optical channels handling overlapped portions of a phase space of an optical device (p., right side, last paragraph, “The optical path is dictated by the size of the miniature displays and the amount of optical overlap between the images of adjacent displays”); and a channel image adapter (p. 1146, under title 3. Image Generation, “Each display is driven by its own computer, which contains a three dimensional “world view” in active memory. The computers are off-the-shelf IBM-type PCs with a single 1.6 GHz Athlon processor and 1 Gbyte of memory…The master computer informs each computer driving a display (a “slave” computer) what part of visual space it should be displaying”); and dividing an input image into image segments for overlapped projection from said display (p. 1146, first paragraph, “The large pupil size relative to the other optical apertures in the system requires that images from adjacent displays have significant overlap.”), one image segment associated with each optical channel (p. 1146, 3. Image Generation, “Each display is driven by its own computer, which contains a three dimensional “world view” in active memory. The computers are off-the-shelf IBM-type PCs with a single 1.6 GHz Athlon processor and 1 Gbyte of memory. The graphics adapter is an nVidia GeForce3, which is capable of drawing 100,000 vertices per sec. (These characteristics improve with each hardware manufacturing iteration, often with decreased cost.) Head position is measured by a tracking system (Intersense IS-600), which sends 6 degrees of freedom position data to a “master” computer. The master computer informs each computer driving a display (a “slave” computer) what part of visual space it should be displaying. The graphics components of the system therefore require 33 (16+16+1) computers.” Note from the reference: Each display in the binocular HMD is driven by its own computer, which generates a unique image directed by a master computer to ensure precise alignment and coverage of the visual field.), said input image comprising data pixels each having a pixel display angle (p. 1145, 1. 
Introduction, “For a typical 19” UXGA display (1600x1200) viewed from a distance of 18 inches, each pixel has an angular size of 2 minutes of arc”… and other examples are given with specific displays and their corresponding angular resolution per pixel. Note from the reference: Each pixel in the input image corresponds to a specific angular size on the display, defining the pixel’s visual display angle which determines the angular resolution of the system), said dividing comprising, placing a copy of each said data pixel into those of said image segments associated with those of said optical channels whose phase space includes said pixel display angle of said data pixel (p. 1146, 3. Image Generation, “Each display is driven by its own computer, which contains a three dimensional “world view” in active memory. The computers are off-the-shelf IBM-type PCs with a single 1.6 GHz Athlon processor and 1 Gbyte of memory. The graphics adapter is an nVidia GeForce3, which is capable of drawing 100,000 vertices per sec. (These characteristics improve with each hardware manufacturing iteration, often with decreased cost.) Head position is measured by a tracking system (Intersense IS-600), which sends 6 degrees of freedom position data to a “master” computer. The master computer informs each computer driving a display (a “slave” computer) what part of visual space it should be displaying.” Note from the reference: the channel image adapter (software/calculation) places a copy of each data pixel into the corresponding image segments for each optical channel (display), based on spatial orientation (pixel display angles) so images align correctly in the combined visual field.). 
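The calibration step cited for the channel correctors, in which each slave applies rotation and extension parameters to the images it displays, amounts to a per-channel geometric correction. A minimal sketch, assuming “extension” means a uniform scale and that pixel positions are 2D coordinates (both assumptions; the function and parameter names are hypothetical, not from the record):

```python
import math

def correct_segment(points, rotation_deg, scale):
    """Hypothetical per-channel corrector: rotate then uniformly scale
    a segment's pixel coordinates, as one channel's calibration pass."""
    t = math.radians(rotation_deg)
    c, s = math.cos(t), math.sin(t)
    # Standard 2D rotation followed by uniform scaling of each point
    return [((x * c - y * s) * scale, (x * s + y * c) * scale) for x, y in points]

corrected = correct_segment([(1.0, 0.0)], rotation_deg=90.0, scale=2.0)
# rotating (1, 0) by 90 degrees and scaling by 2 gives approximately (0, 2)
```

In the cited system each display would hold its own (rotation, extension) parameters from the calibration process, so pixel-level alignment is achieved per optical channel rather than globally.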
Regarding claim 18, Massof teaches the method of claim 17 and also comprising, providing per-optical-channel compensation to each associated said image segment to correct imaging errors of its associated lens, thereby to produce a per-channel corrected image segment; and displaying each per-channel corrected image segment on its associated said display (p. 1146, 3. Image Generation, “Each display is driven by its own computer, which contains a three dimensional “world view” in active memory. The computers are off-the-shelf IBM-type PCs with a single 1.6 GHz Athlon processor and 1 Gbyte of memory. The graphics adapter is an nVidia GeForce3, which is capable of drawing 100,000 vertices per sec. (These characteristics improve with each hardware manufacturing iteration, often with decreased cost.) Head position is measured by a tracking system (Intersense IS-600), which sends 6 degrees of freedom position data to a “master” computer. The master computer informs each computer driving a display (a “slave” computer) what part of visual space it should be displaying.” Note from the reference: the channel image adapter (software/calculation) places a copy of each data pixel into the corresponding image segments for each optical channel (display), based on spatial orientation (pixel display angles) so images align correctly in the combined visual field.).

Claim 16 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cheng et al. (US 20130187836).

Regarding claim 16, Cheng teaches a compound lens comprising a plurality of lens portions (prism 101 and lens), each portion cut from a donor lens having a short EFL (¶3, the purpose of the optical system is a short focal length and a micro-display), said lens portions glued together in a stacked arrangement (¶56, adhering, gluing, bonding, and welding or integrating the prisms by injection molding).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Pijlman et al. (US 20160349524) in view of Massof, Robert W., et al. “37.1: invited paper: Full‐field high‐resolution binocular HMD.” SID Symposium Digest of Technical Papers, vol. 34, no. 1, May 2003, pp. 1145–1147, https://doi.org/10.1889/1.1832490.

Regarding claim 2, Pijlman teaches a near eye display system (¶2, ¶58, ¶60, multi-view display device) comprising, an optical system (¶109, ¶165, optical system), a processor (¶109, ¶160, processing image data) and a housing (note: this would be inherent because a housing is needed to protect the components from the external environment) on which said optical system and processor are mounted close to a pair of human eyes (¶4, ¶7, ¶58, ¶60), said optical system comprising, per eye (shown in fig. 2 and 9 are viewed by both eyes) a plurality of stacked optical channels (¶60, the autostereoscopic display device 1, shown in fig. 1, in particular, each lenticular element 11 overlies a small group of display pixels 5 in each row, where, in the current example, a row extends perpendicular to the elongate axis of the lenticular element 11. 
The lenticular element 11 projects the output of each display pixel 5 of a group in different directions, so as to form the several different views), each optical channel comprising at least a portion of a lens (lenticular elements 11) and at least a portion of a display (¶58, the lenticular elements 11 are in the form of convex cylindrical lenses each having an elongate axis 12 extending perpendicular to the cylindrical curvature of the element, and each element acts as a light output directing means to provide different images, or views, from the display panel 3 to the eyes of a user positioned in front of the display device 1), at least two optical channels handling overlapped portions of a phase space of said system (fig. 2 and 9). Pijlman does not specifically teach said processor comprising, a channel image adapter to divide an input image into image segments, one image segment associated with each optical channel, said input image comprising data pixels each having a pixel display angle, said channel image adapter to place a copy of each said data pixel into those of said image segments associated with those of said optical channels whose phase space includes said pixel display angle of said data pixel; and a plurality of channel correctors, one per optical channel, each to provide compensation of its associated said image segment to correct imaging errors of its associated lens and to display its corrected image segment on its associated display. However, in a similar field of endeavor, Massof teaches a near eye display system (system shown in images on page 1146) comprising, said processor (p. 1146, under title 3. Image Generation, “Each display is driven by its own computer, which contains a three dimensional “world view” in active memory. The computers are off-the-shelf IBM-type PCs with a single 1.6 GHz Athlon processor and 1 Gbyte of memory”) comprising, a channel image adapter (p. 1146, under title 3. 
Image Generation, “Each display is driven by its own computer, which contains a three dimensional “world view” in active memory. The computers are off-the-shelf IBM-type PCs with a single 1.6 GHz Athlon processor and 1 Gbyte of memory…The master computer informs each computer driving a display (a “slave” computer) what part of visual space it should be displaying”) to divide an input image into image segments, one image segment associated with each optical channel (p. 1146, 3. Image Generation, “Each display is driven by its own computer, which contains a three dimensional “world view” in active memory. The computers are off-the-shelf IBM-type PCs with a single 1.6 GHz Athlon processor and 1 Gbyte of memory. The graphics adapter is an nVidia GeForce3, which is capable of drawing 100,000 vertices per sec. (These characteristics improve with each hardware manufacturing iteration, often with decreased cost.) Head position is measured by a tracking system (Intersense IS-600), which sends 6 degrees of freedom position data to a “master” computer. The master computer informs each computer driving a display (a “slave” computer) what part of visual space it should be displaying. The graphics components of the system therefore require 33 (16+16+1) computers.” Note from the reference: Each display in the binocular HMD is driven by its own computer, which generates a unique image directed by a master computer to ensure precise alignment and coverage of the visual field.), said input image comprising data pixels each having a pixel display angle (p. 1145, 1. 
Introduction, “For a typical 19” UXGA display (1600x1200) viewed from a distance of 18 inches, each pixel has an angular size of 2 minutes of arc”… and other examples are given with specific displays and their corresponding angular resolution per pixel. Note from the reference: Each pixel in the input image corresponds to a specific angular size on the display, defining the pixel’s visual display angle which determines the angular resolution of the system), said channel image adapter to place a copy of each said data pixel into those of said image segments associated with those of said optical channels whose phase space includes said pixel display angle of said data pixel (p. 1146, 3. Image Generation, “Each display is driven by its own computer, which contains a three dimensional “world view” in active memory. The computers are off-the-shelf IBM-type PCs with a single 1.6 GHz Athlon processor and 1 Gbyte of memory. The graphics adapter is an nVidia GeForce3, which is capable of drawing 100,000 vertices per sec. (These characteristics improve with each hardware manufacturing iteration, often with decreased cost.) Head position is measured by a tracking system (Intersense IS-600), which sends 6 degrees of freedom position data to a “master” computer. The master computer informs each computer driving a display (a “slave” computer) what part of visual space it should be displaying.” Note from the reference: the channel image adapter (software/calculation) places a copy of each data pixel into the corresponding image segments for each optical channel (display), based on spatial orientation (pixel display angles) so images align correctly in the combined visual field.); and a plurality of channel correctors, one per optical channel, each to provide compensation of its associated said image segment to correct imaging errors of its associated lens and to display its corrected image segment on its associated display (p. 1146, 3. 
Image Generation, “Images on the screens need to be aligned to a pixel level of precision: This is done by software in a calibration process that provides each slave with rotation and extension parameters that it then applies to the images it displays,” note from the reference: a channel corrector is the software calibration process that applies rotation and scaling adjustments to each display’s image to ensure pixel-level alignment and accurate visual integration across multiple optical channels in the binocular HMD system). It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the system of Pijlman with a channel image adapter to divide an input image into image segments, one image segment associated with each optical channel, said input image comprising data pixels each having a pixel display angle, said channel image adapter to place a copy of each said data pixel into those of said image segments associated with those of said optical channels whose phase space includes said pixel display angle of said data pixel; and a plurality of channel correctors, one per optical channel, each to provide compensation of its associated said image segment to correct imaging errors of its associated lens and to display its corrected image segment on its associated display of Massof, for the purpose of achieving a full field of view with high binocular overlap and high angular resolution with a graphics system that will drive the display at a high level of detail with high frame rates (p. 1145, left column).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Cheng et al. (US 20130187836) in view of Massof, Robert W., et al. “37.1: invited paper: Full‐field high‐resolution binocular HMD.” SID Symposium Digest of Technical Papers, vol. 34, no. 1, May 2003, pp. 1145–1147, https://doi.org/10.1889/1.1832490. 
Regarding claim 3, Cheng teaches a near eye display system (¶2, head-mounted display device) comprising, per eye (¶4, single micro-display for each eye), a compound lens (auxiliary lens and prism (lens like functions)) formed of multiple lens portions of short effective focal length (EFL) lenses (¶32, ¶76-¶77; ¶3, short focal length); a display unit comprising multiple displays (¶10, display component including a plurality of micro-displays), one per lens portion (¶77, head-mounted display device can comprise display channels each comprising, a prism with free-form surface 1302, a micro-display device 1302 and a lens with free-form surfaces 1304); and said compound lens, display unit and said image adapter operating to provide a field of view of over 60 degrees and an eyebox at least covering the range of pupil motion of said eye (¶3, ¶12, ¶33, ¶79; ¶23, horizontal field of view of at least 70 degrees). Cheng does not specifically teach an image adapter to divide an input image into overlapped image segments, one per display. However, in a similar field of endeavor, Massof teaches a system (system shown in images on page 1146) comprising, an image adapter (p. 1146, under title 3. Image Generation, “Each display is driven by its own computer, which contains a three dimensional “world view” in active memory. The computers are off-the-shelf IBM-type PCs with a single 1.6 GHz Athlon processor and 1 Gbyte of memory…The master computer informs each computer driving a display (a “slave” computer) what part of visual space it should be displaying”) to divide an input image into overlapped image segments (p. 1146, first paragraph, “The large pupil size relative to the other optical apertures in the system requires that images from adjacent displays have significant overlap.”), one per display (p. 1145, 1. 
Introduction, “For a typical 19” UXGA display (1600x1200) viewed from a distance of 18 inches, each pixel has an angular size of 2 minutes of arc”… and other examples are given with specific displays and their corresponding angular resolution per pixel. Note from the reference: Each pixel in the input image corresponds to a specific angular size on the display, defining the pixel’s visual display angle which determines the angular resolution of the system). It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the system of Cheng with an image adapter to divide an input image into overlapped image segments, one per display of Massof, for the purpose of achieving a full field of view with high binocular overlap and high angular resolution with a graphics system that will drive the display at a high level of detail with high frame rates (p. 1145, left column).

Claims 4, 6-15, and 19-24 are rejected under 35 U.S.C. 103 as being unpatentable over Massof, Robert W., et al. “37.1: invited paper: Full‐field high‐resolution binocular HMD.” SID Symposium Digest of Technical Papers, vol. 34, no. 1, May 2003, pp. 1145–1147, https://doi.org/10.1889/1.1832490 as applied to claim 1 above, and further in view of Cheng et al. (US 20130187836).

Regarding claim 4, Massof teaches the invention as set forth above but does not specifically teach a housing useful for virtual reality or augmented reality. However, in a similar field of endeavor, Cheng teaches the system comprising a housing (note: this would be inherent because a housing is needed to protect the components from the external environment) useful for virtual reality or augmented reality (¶32, augmented reality). It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the system of Massof with a housing useful for virtual reality or augmented reality of Cheng, for the purpose of having a head-mounted display system (¶32). 
Regarding claim 6, Massof teaches the invention as set forth above but does not specifically teach having optical axes which are tilted with respect to each other. However, in a similar field of endeavor, Cheng teaches the system (fig. 7a) and having optical axes which are tilted with respect to each other (shown in fig. 7, the optical axes are tilted with respect to each other; note: the optical axis is an imaginary straight line that passes through the centers of curvature of all optical surfaces in a lens or optical system, representing the axis of symmetry for light propagation). It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the system of Massof with having optical axes which are tilted with respect to each other of Cheng, for the purpose of having a head-mounted display system (¶32). Regarding claim 7, Massof teaches the invention as set forth above but does not specifically teach having at least one said display which is off-center with respect to an optical axis of its said lens or lens portion. However, in a similar field of endeavor, Cheng teaches the system (fig. 2a) and having at least one said display which is off-center (micro-display 6 is off axis from the prism 101 and lens) with respect to an optical axis of its said lens or lens portion (101 and lens). It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the system of Massof with having at least one said display which is off-center with respect to an optical axis of its said lens or lens portion of Cheng, for the purpose of having a head-mounted display system (¶32). Regarding claim 8, Massof teaches the invention as set forth above but does not specifically teach at least one said lens or lens portion is cut from a donor lens. 
However, in a similar field of endeavor, Cheng teaches the system wherein at least one said lens or lens portion is cut from a donor lens (¶32, ¶76-¶77, the tilted head-mounted display system can further comprise an auxiliary lens with free-form surfaces. Each lens cooperates with the corresponding prism with free-form surfaces, so that the user is able to see external scenery for augmented reality application. The second optical surface of the prism is a semi-transmissive and semi-reflective mirror surface.). It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the system of Massof with at least one said lens or lens portion is cut from a donor lens of Cheng, for the purpose of having a head-mounted display system (¶32). Regarding claim 9, Massof in view of Cheng teaches the invention as set forth above and Cheng further teaches said cut is asymmetric about an optical axis of its said donor lens (fig. 1-12). Motivation to combine is the same as in claim 1. Regarding claim 10, Massof teaches the invention as set forth above but does not specifically teach comprising optical separators between neighboring channels, neighboring lenses or neighboring lens portions. However, in a similar field of endeavor, Cheng teaches the system (fig. 6) and also comprising optical separators (fig. 6, separated by air, which can be considered an “air lens” separator) between neighboring channels, neighboring lenses or neighboring lens portions (lenses shown in fig. 6). It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the system of Massof with comprising optical separators between neighboring channels, neighboring lenses or neighboring lens portions of Cheng, for the purpose of having a head-mounted display system (¶32). 
Regarding claim 11, Massof teaches the invention as set forth above but does not specifically teach said imaging errors comprise at least one of color aberration and image distortion. However, in a similar field of endeavor, Cheng teaches the system wherein said imaging errors comprise at least one of color aberration and image distortion (¶5-¶6, ¶33, ¶79, the system needs distortion correction on each display channel, therefore the imaging errors comprise image distortion). It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the system of Massof with said imaging errors comprise at least one of color aberration and image distortion of Cheng, for the purpose of having a head-mounted display system (¶32). Regarding claim 12, Massof teaches the invention as set forth above but does not specifically teach said lenses from said optical channels are formed into a compound lens. However, in a similar field of endeavor, Cheng teaches the system wherein said lenses from said optical channels (fig. 6; ¶65, channel 601 and 602) are formed into a compound lens (¶32, ¶76-¶77, prism and lens). It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the system of Massof with said lenses from said optical channels are formed into a compound lens of Cheng, for the purpose of having a head-mounted display system (¶32). Regarding claim 13, Massof teaches the invention as set forth above but does not specifically teach said displays from said optical channels are formed into a single display. However, in a similar field of endeavor, Cheng teaches the system wherein said displays from said optical channels (fig. 2, channel) are formed into a single display (fig. 2a/b). 
It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the system of Massof with said displays from said optical channels formed into a single display, as taught by Cheng, for the purpose of having a head-mounted display system (¶32).

Regarding claim 14, Massof teaches the invention as set forth above but does not specifically teach that said displays from said optical channels are separated from each other by empty display areas. However, in a similar field of endeavor, Cheng teaches the system wherein said displays from said optical channels (fig. 6; ¶65, channels 601 and 602) are separated from each other by empty display areas (fig. 6: the channels are separated by air, which can be considered an "air lens" serving as a separator). It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the system of Massof with said displays from said optical channels separated from each other by empty display areas, as taught by Cheng, for the purpose of having a head-mounted display system (¶32).

Regarding claim 15, Massof teaches the invention as set forth above but does not specifically teach that each optical channel has an eye-display distance of no more than 30 mm. However, in a similar field of endeavor, Cheng teaches the system wherein each optical channel has an eye-display distance of no more than 30 mm (claim 29, 15 mm). It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the system of Massof with each optical channel having an eye-display distance of no more than 30 mm, as taught by Cheng, for the purpose of having a head-mounted display system (¶32).

Regarding claim 19, Massof teaches the invention as set forth above but does not specifically teach tilting optical axes of said optical channels with respect to each other. However, in a similar field of endeavor, Cheng teaches the method (fig. 7a) comprising tilting optical axes of said optical channels with respect to each other (shown in fig. 7, the optical axes are tilted with respect to each other; note: the optical axis is an imaginary straight line that passes through the centers of curvature of all optical surfaces in a lens or optical system, representing the axis of symmetry for light propagation). It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the system of Massof with tilting optical axes of said optical channels with respect to each other, as taught by Cheng, for the purpose of having a head-mounted display system (¶32).

Regarding claim 20, Massof teaches the invention as set forth above but does not specifically teach positioning at least one said display off-center with respect to an optical axis of its said lens. However, in a similar field of endeavor, Cheng teaches the method (fig. 2a) comprising positioning at least one said display off-center (micro-display 6 is off axis from the prism 101 and lens) with respect to an optical axis of its said lens (101 and lens). It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the system of Massof with positioning at least one said display off-center with respect to an optical axis of its said lens, as taught by Cheng, for the purpose of having a head-mounted display system (¶32).

Regarding claim 21, Massof teaches the invention as set forth above but does not specifically teach cutting at least one said lens from a donor lens. However, in a similar field of endeavor, Cheng teaches the method comprising cutting at least one said lens from a donor lens (¶32, ¶76-¶77: the tilted head-mounted display system can further comprise an auxiliary lens with free-form surfaces; each lens cooperates with the corresponding prism with free-form surfaces so that the user is able to see external scenery for augmented reality applications; the second optical surface of the prism is a semi-transmissive and semi-reflective mirror surface). It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the system of Massof with cutting at least one said lens from a donor lens, as taught by Cheng, for the purpose of having a head-mounted display system (¶32).

Regarding claim 22, Massof in view of Cheng teaches the invention as set forth above, and Cheng further teaches that said cutting is asymmetric about an optical axis of its said donor lens (figs. 1-12).

Regarding claim 23, Massof teaches the invention as set forth above but does not specifically teach placing optical separators between neighboring said optical channels. However, in a similar field of endeavor, Cheng teaches the method (fig. 6) comprising placing optical separators (fig. 6: the channels are separated by air, which can be considered an "air lens" serving as a separator) between neighboring said optical channels (lenses shown in fig. 6). It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the system of Massof with placing optical separators between neighboring said optical channels, as taught by Cheng, for the purpose of having a head-mounted display system (¶32).

Regarding claim 24, Massof teaches the invention as set forth above but does not specifically teach that said imaging errors comprise at least one of color aberration and image distortion. However, in a similar field of endeavor, Cheng teaches the method wherein said imaging errors comprise at least one of color aberration and image distortion (¶5-¶6, ¶33, ¶79: the system needs a distortion correction on each display channel; therefore the imaging errors comprise image distortion).
It would have been obvious to one of ordinary skill in the art before the effective filing date to provide the method of Massof with said imaging errors comprising at least one of color aberration and image distortion, as taught by Cheng, for the purpose of having a head-mounted display system (¶32).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HENRY DUONG, whose telephone number is (571) 270-0534. The examiner can normally be reached Monday-Friday from 9:00 AM to 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pinping Sun, can be reached at (571) 270-1284.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HENRY DUONG/
Primary Patent Examiner, Art Unit 2872
10/11/25

Prosecution Timeline

Jun 14, 2022
Application Filed
Jan 25, 2025
Non-Final Rejection — §102, §103
Jun 25, 2025
Response Filed
Oct 14, 2025
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585114
SYSTEMS AND METHODS FOR CALIBRATING AND EVALUATING A WEARABLE HEADS-UP DISPLAY WITH INTEGRATED CORRECTIVE PRESCRIPTION
2y 5m to grant · Granted Mar 24, 2026
Patent 12585116
LIGHT-SHIELDING MEMBER AND HEAD-MOUNTED DISPLAY
2y 5m to grant · Granted Mar 24, 2026
Patent 12585163
CAMERA DEVICE AND ELECTRONIC APPARATUS
2y 5m to grant · Granted Mar 24, 2026
Patent 12585124
Visualization System with Lighting
2y 5m to grant · Granted Mar 24, 2026
Patent 12578580
Glasses Augmented Passive Device for Combining Handheld Display with Surrounding Scenery
2y 5m to grant · Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
79%
Grant Probability
86%
With Interview (+6.5%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 452 resolved cases by this examiner. Grant probability derived from career allow rate.