Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement submitted on 11/17/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Amendment
This office action is in response to the communication filed 11/17/2025.
The cancellation of claims 10 and 19, filed 11/17/2025, is acknowledged and accepted.
Amendments to the specification and to claims 1, 3, 8, 13-14, and 16, filed 11/17/2025, are acknowledged and accepted.
Newly submitted claim 21, filed 11/17/2025, is acknowledged and accepted.
In view of claim 8’s amendment, the previous objection to claim 8 is withdrawn.
Response to Arguments
On pgs. 10-12 of the Remarks, filed 11/17/2025, Applicant's arguments with respect to claim 3 have been fully considered but are moot because they are directed to the newly amended claims, filed 11/17/2025, rather than to the Non-Final Rejection, filed 7/17/2025. The newly amended claims are addressed below.
Applicant's remaining arguments, filed 11/17/2025, have been fully considered but are not persuasive, for the reasons that follow:
On pg. 8 of the Remarks, Applicant argues that the cited prior art “fails to disclose… a neural network that is trained to analyze non-demosaiced raw image data” as newly amended into claim 13. Examiner disagrees, referring to Knutsson in claim 13’s rejection below. The cited passages directly call out image pre-processing and demosaicing as merely optional procedures – thereby establishing neural networks trained to analyze non-demosaiced raw images.
On pg. 9 of the Remarks, Applicant argues that the cited prior art ‘fails to describe training the NSPs generally or what kind of training data to use for training let alone using “a first training set of first anamorphic images characterized by anamorphic lens artifacts” as set forth in amended claim 1’. Examiner disagrees, referring again to Knutsson in claim 1’s rejection below. The cited passages directly call out image pre-processing as a merely optional procedure – thereby establishing neural networks that are trained to analyze uncompensated images retaining their anamorphic character, and whose training sets would naturally have included such uncompensated images.
On pg. 13 of the Remarks and regarding claim 7, Applicant argues that the cited prior art “fails to disclose or suggest using eye position data to determine the gaze path or gaze angle.” Examiner disagrees. As earlier stated in claim 7’s rejection (and as Applicant acknowledged in their Remarks), Niemasik discloses deep learning for detecting the presence and location of faces in an input image using "facial landmarks, such as a position of eyes"; Niemasik also discusses the use of gaze paths/angles. Both eye positions and gaze paths are therefore clearly present, and it would appear Applicant's specific concern is that the determination of Niemasik's gaze paths (for example) may not directly involve the eye positions themselves.
This would not be a valid perspective or argument, however – certainly not one consistent with standard factual considerations and reasoning. As a matter of basic definitions (mathematical, colloquial, or otherwise), a “path” is defined, at least in part, by its own starting point and endpoint. It follows that there is no reasonable way to define a gaze path without somehow involving the starting (eye) position, as illustrated below. Any argument to the contrary would fail to properly consider how these basic objects relate to one another – an understanding which requires only rudimentary knowledge of coordinate geometry or vector arithmetic, as commonly taught at or below the secondary school level, and as would certainly be accessible to one of ordinary skill in the art.
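For illustration only (the notation here is Examiner's own, not Niemasik's): let E denote the eye position and d a unit vector along the gaze direction. Any gaze path may then be parameterized as P(t) = E + t·d for t ≥ 0, with the gaze point given by the endpoint G = E + t*·d for some terminal value t*. Every point on the path – including the gaze point – is thus defined in terms of, and inseparable from, the starting (eye) position E.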
As a courtesy, and for completeness, Examiner further cautions Applicant against similarly improper arguments that would have Niemasik ascertaining gaze path endpoints (which would perhaps more directly correspond to claim 7’s “gaze point”) independently of the starting points (eye positions). After all, it would make little sense to suppose that a device somehow determines where an eye’s gaze ends without knowing where the eye is in the first place – just as it would make little sense to argue that Niemasik determines an eye’s gaze path/direction/angle without considering the eye’s location. It stands to reason, therefore, that by disclosing eye positions and gaze paths, Niemasik sufficiently discloses the claimed gaze point determination from eye positions – or at least some functional equivalent, whose relation can be trivially determined by one with a reasonable command of basic pre-calculus concepts.
On pgs. 14-15 of the Remarks, and regarding limitations of the newly amended claim 8 (including those of the previous claim 10), Applicant argues “the system of Shpunt does not make use of that [pupillary] distance (to enable photosites or otherwise)”. Examiner disagrees –
A. noting that Applicant, on pg. 14 of the Remarks, has acknowledged only a limited portion of Shpunt’s ¶ 29, which fails to capture many of the pertinent details Examiner relied upon in the rejection; and
B. reiterating that – as noted in the actual rejection of the previous claim 10 – Shpunt discloses a VR system that tracks eye movement – i.e. (pupillary) distance from gaze tracking module 190 – adjusting resolution to be higher in immediate/narrower fields-of-view and lower in more peripheral fields-of-view. (Note from ¶ 29: “Thus, in some embodiments, the system 100 may be configured to capture… higher resolution image data for a narrower field of view centered on a vector corresponding to the user's current line of sight (e.g., gaze direction)”. See also much of the remaining disclosure, e.g. FIGs. 2-5.)
Setting aside the incomplete details pressed by Applicant in their Remarks (item A above), Examiner asserts that proper consideration of the actual rejection of record (item B above) reveals that Shpunt’s VR system adjusts resolution based on eye/gaze activity – i.e. based on line of sight, and thus pupil positions (given that a line of sight is commonly defined to terminate, at one end, on or near the pupil).
It is therefore unclear why Applicant argues that “Shpunt does not make use” of pupillary distances in the quoted argument above – unless Applicant is under the impression that pupillary distances are somehow disconnected from pupil positions. Such a notion, if Applicant intended to argue it, would be unconvincing and fundamentally flawed, as any information related to pupillary distances is functionally equivalent to, and accessible from, information on the pupil positions. This is, again, a mere consequence of rudimentary mathematics and coordinate geometry.
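For illustration only (again in Examiner's own notation): denoting the left and right pupil positions p_L and p_R, the pupillary distance is simply the Euclidean norm d = ||p_L − p_R||. The distance is thus directly computable from – and carries no information beyond – the pair of pupil positions, a one-line consequence of coordinate geometry.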
Regarding Applicant’s additional argument that “the system of Shpunt fails to selectively enable photosites”, Examiner further disagrees – acknowledging Applicant’s reference to camera adjustments in ¶ 79, but noting that Applicant has overlooked other parts of Shpunt (e.g. ¶s 80-86, FIG. 10) actually pertaining to camera/image sensors (and hence the claimed photosites). These describe the image sensors as capturing images for only a single (or narrow range of) viewing angle(s), and as requiring adjustable optical elements (e.g. mirrors) for redirecting light and enabling image capture in different viewing directions. Some selective enablement of the image sensor’s photosites/portions is clearly inevitable in this setup.
Beyond the above confirmation of Shpunt’s disclosure of selectively enabled photosites, Applicant is further advised/reminded that – in the previous rejection of claims 8 and 10 (together corresponding to newly amended claim 8) – Examiner cited Niemasik’s teachings of selective enabling of photosites for high-/low-resolution imaging in independent claim 8, before citing Shpunt, who connects high-/low-resolution imaging to gaze tracking (and hence pupil positions/distances), in dependent claim 10. Applicant’s arguments with respect to selective photosite enablement are thus found further unpersuasive on the additional ground that they fail to consider Niemasik’s disclosure among the presented evidence.
On pgs. 16-17 of the Remarks, and regarding claim 12, Applicant argues “the system of Yu does not identify the retina”. Examiner disagrees, noting that in their citation/summary of Yu’s disclosure, Applicant focuses only on narrow portions of Yu – overemphasizing what are clearly exemplary details involving eye parts other than the claimed retina – and arguing that these details support an unduly narrow reading of Yu.
For example, Applicant recites an excerpt of Yu’s ¶ 99 applied in the previous rejection of claim 12, which reads “at least some portions of the light may enter eye 550 through cornea 552 and reach iris 554, pupil 556, lens 558, or retina 560 of eye 550 … Different portions of the eye surface and the interfaces within eye 550 may have different patterns of features. Thus, an intensity pattern of the light reflected by eye 550 may depend on the pattern of features within the illuminated portion of eye 550, which may allow identification of the portions of the eye (e.g., iris 554 or pupil 556)”.
Applicant then takes the above excerpt – which is broadly exhaustive in listing various eye parts, yet also contains references to more exemplary details (e.g., iris and pupil identification) alluding to specific discussion given elsewhere – and presents the exemplary details as evidence that other features (retinal identification) besides those exemplified are somehow unsupported by Yu. This is neither a logical conclusion nor a proper argument.
Under a broader and more appropriate interpretation, one may instead suppose that Yu intentionally discloses all of the quoted features (cornea, iris, pupil, lens, retina, …) in order to provide broad literal support for their identification. The mere fact that Yu then pivots the discussion toward specific exemplary details associated with the iris or pupil does not somehow disqualify these or other remaining parts of the disclosure – nor would it prevent one of ordinary skill from gathering such basic concepts as retinal identification from the given text.
Examiner will also point out that one can simply turn to ¶ 72 and find Yu indeed discussing identification of retinal structures such as the fovea: “Because the foveal axis is defined according to the fovea, which is located in the back of the eye, the foveal axis may be difficult or impossible to measure directly in some eye-tracking embodiments”. Here, Yu implicitly confirms the existence of alternative embodiments in which retinal features can be measured directly; Yu even separates embodiments in which retinal measurements are “difficult” from those in which they are “impossible” – providing clear textual support that such measurements can be accomplished in certain embodiments. This stands in contrast to Applicant’s argument, which incorrectly declares retinal identification unsupported and rests on little more than an improper negative inference that neglects much of the written text in favor of non-exclusive examples.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 13 and 16-18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Knutsson and Kardash (US 20240323507 A1, hereinafter “Knutsson”).
Regarding claim 13, Knutsson discloses (see ¶s 88-91, FIGs. 7(A,B)) a camera module, comprising:
an anamorphic lens (lens system 700) and a camera sensor (image sensor 710); and
a first computer vision logic comprising a first neural network that is trained to analyze first light information from a first portion of the camera sensor (image sensor 710), the first light information comprises non-demosaiced raw image data. (See ¶s 2-7 regarding object detection/tracking/identification (i.e. analyzing). See also:
¶ 102: “At step 1010, the process 1000 includes receiving the image at an image sensor (e.g., image sensor 710 of FIG. 7 A and FIG. 7B).”
¶ 103: “the process 1000 can be performed by vehicle computing system 250”
¶ 49: “the vehicle computing system 250 can include or can be implemented using... Neural Network Signal Processors (NSPs)...”.
Note further ¶s 36-37, which describe image pre-processing for (anamorphic/cylindrical) distortion compensation as merely optional and even unfavorable – and also ¶ 71, which discusses image processor 350 performing various optional tasks, including both the abovementioned pre-compensation of image distortion and demosaicing.
Knutsson therefore establishes a pipeline for object detection/analysis which applies neural networks to non-demosaiced raw image data.)
Regarding claim 16, Knutsson discloses the camera module of claim 13.
Knutsson further discloses:
where the anamorphic lens (lens system 700) bends light corresponding to a distance and an angle of a light source relative to the anamorphic lens (lens system 700) (Examiner notes that this limitation is always satisfied for any lens exposed to a light source: standard optics holds that the bending (i.e. refracting) of light by any real lens will always be affected by – and thus correspond to – the relative displacement (distance, direction/angle) of the light source, as illustrated following this limitation) and
the first computer vision logic is trained to determine the light source from the distance and the angle (see ¶s 2-7 regarding object detection/tracking/identification (i.e. determination of an object acting as a light source)).
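For illustration of the optical point above, and for illustration only (these are standard textbook relations, not drawn from Knutsson): under the thin-lens approximation, a point source at distance u from the lens is imaged at a distance v satisfying 1/u + 1/v = 1/f, where f is the focal length; a source displaced off-axis by an angle θ is likewise imaged at a correspondingly displaced position (for a distant source, at a height of approximately f·tan θ in the focal plane). The refracted ray geometry thus necessarily varies with – i.e. corresponds to – both the distance and the angle of the source relative to the lens.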
Regarding claim 17, Knutsson discloses the camera module of claim 13.
Knutsson further discloses where the camera module (comprising lens system 700 and image sensor 710) further comprises an interface configured to connect to an image signal processor (image processor 350) (see FIG. 3, ¶s 62-63; note ¶ 62: “lens 315 can correspond to a lens system (e.g., lens system 700 of FIG. 7A and FIG. 7B)”).
Regarding claim 18, Knutsson discloses the camera module of claim 13.
Knutsson further discloses where the camera module (comprising lens system 700 with image sensor 710) further comprises an image signal processor (image processor 350) (see FIG. 3, ¶s 62-63; note ¶ 62: “lens 315 can correspond to a lens system (e.g., lens system 700 of FIG. 7A and FIG. 7B)”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Knutsson and Kardash (US 20240323507 A1, hereinafter “Knutsson”) in view of Sudoh (US 20240019699 A1).
Regarding claim 1, Knutsson discloses a smart glasses apparatus, comprising:
a first anamorphic lens (lens system 700) configured to focus light onto a first camera sensor (image sensor 710), where the first camera sensor (image sensor 710) is configured to capture a first anamorphic image (see ¶s 88-91, FIG. 7(A,B)); and
a first computer vision logic comprising a first neural network that is trained on a first training set of first anamorphic images characterized by anamorphic lens artifacts to analyze a first object in a first portion of the first anamorphic image. (See ¶s 2-7 regarding object detection/tracking/identification (i.e. analyzing) of anamorphic images (with different magnifications along orthogonal axes). See also:
¶ 98: “process 1000 can include, at step 1002, receiving light at a lens system (e.g., lens system 700 of FIG. 7A and FIG. 7B)”
¶ 103: “the process 1000 can be performed by vehicle computing system 250”
¶ 49: “the vehicle computing system 250 can include or can be implemented using... Neural Network Signal Processors (NSPs)...”
Note lastly ¶s 36-37, which describe image pre-processing for (anamorphic/cylindrical) distortion compensation as merely optional and even unfavorable.
Overall, Knutsson establishes and motivates a pipeline for object detection/analysis which applies neural networks to uncompensated anamorphic images; this naturally indicates that the machine learning model is also trained in this domain of operation – i.e. that it is trained on anamorphic images characterized by anamorphic lens artifacts (“optical distortion characteristic”).)
Knutsson does not explicitly disclose a smart glasses apparatus comprising: a first anamorphic lens. (Examiner does note, however, that while Knutsson states “Examples are described herein using vehicles…”, Knutsson also states that their systems/techniques may be implemented in “… a wearable device, an extended reality device (e.g., a virtual reality (VR) device, an augmented (AR) device, or a mixed reality (MR) device) …”, which encompasses smart glasses; see ¶s 8 and 46)
Knutsson and Sudoh are commonly related to head-mounted extended reality devices with anamorphic lenses.
Sudoh explicitly discloses (see FIG. 1-2A, ¶s 39-58) a smart glasses apparatus (head-mounted display 1), comprising: a first anamorphic lens (lens portion 30).
It would have therefore been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Knutsson and Sudoh, in order to enable a reduction of light guide thickness in the head-mounted display (Sudoh ¶ 37).
Claims 2 and 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Knutsson in view of Sudoh, as applied to claim 1 above, and in further view of Menadeva et al (US 20130279756 A1, hereinafter “Menadeva”).
Regarding claim 2, modified Knutsson discloses the smart glasses apparatus of claim 1.
Knutsson further discloses where the first anamorphic lens (lens system 700) is vertically oriented (see ¶ 39: “a first axis (e.g., a horizontal image axis) can have a greater magnification than a second axis (e.g., a vertical axis)” – i.e. a greater vertical field-of-view).
Modified Knutsson does not disclose that the first object comprises a hand.
Knutsson and Menadeva are commonly related to object detection/tracking/identification.
Menadeva discloses that the first object comprises a hand. (Menadeva discloses image processing and machine learning methods for hand tracking/identification; see ¶s 10-24, FIGs. 1-5.)
It would have therefore been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to further combine the teachings of Knutsson and Menadeva in order to address challenges associated with edge detection of moving objects and enable accurate gesture recognition for human-machine interfacing (Menadeva ¶s 3-10).
Regarding claim 4, modified Knutsson discloses the smart glasses apparatus of claim 1.
Sudoh further discloses (see FIG. 1) the smart glasses (head-mounted display 1) further comprising a second anamorphic lens (lens portion 3).
Knutsson further discloses the second anamorphic lens (lens system 700) configured to focus light onto a second camera sensor (image sensor 710), and where the second camera sensor (image sensor 710) is configured to capture a second anamorphic image (see ¶s 88-91, FIG. 7(A,B)). (Examiner notes this to be a duplication of a limitation addressed by Knutsson in claim 1 above – for the first anamorphic lens, first camera sensor, and first anamorphic image – which would automatically be satisfied upon implementing Knutsson’s teachings in binocular devices (i.e. in combination with Sudoh), such as smart glasses or other headsets, due to their bilateral symmetry.)
Modified Knutsson does not disclose that the first computer vision logic is trained to identify a first hand from the first portion of the first anamorphic image and a second hand from a second portion of the second anamorphic image.
Knutsson and Menadeva are commonly related to object detection/tracking/identification.
Menadeva discloses the first computer vision logic is trained to identify a first hand from the first portion of the first anamorphic image and a second hand from a second portion of the second anamorphic image. (Menadeva discloses image processing and machine learning methods for hand tracking/identification; see ¶s 10-24, FIGs. 1-5.)
It would have therefore been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to further combine the teachings of Knutsson and Menadeva in order to address challenges associated with edge detection of moving objects and enable accurate gesture recognition for human-machine interfacing (Menadeva ¶s 3-10).
Regarding claim 5, modified Knutsson discloses the smart glasses apparatus of claim 4.
Sudoh further discloses (see FIG. 1) the first anamorphic lens (lens portion 3) and the second anamorphic lens (lens portion 3).
Knutsson further discloses where the first anamorphic lens and the second anamorphic lens are both vertically oriented (see ¶ 39: “a first axis (e.g., a horizontal image axis) can have a greater magnification than a second axis (e.g., a vertical axis)” – i.e. a greater vertical field-of-view).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Knutsson in view of Sudoh and Menadeva, as applied to claim 2 above, and in further view of Asbun et al (US 20220092308, hereinafter “Asbun”) and Niemasik and Plakal (WO 2019194906 A1, hereinafter “Niemasik”).
Regarding claim 3, modified Knutsson discloses the smart glasses apparatus of claim 2.
Modified Knutsson does not disclose a second computer vision logic that is trained to:
determine a region-of-interest at a second portion of the first anamorphic image based on a gaze point of a wearer at the second portion of the first anamorphic image; and
detect a face in the region-of-interest.
Knutsson and Asbun are commonly related to head-mounted extended reality devices and object detection/tracking/identification.
Asbun discloses a second computer vision logic that is trained to: determine a region-of-interest (ROI 812) at a second portion of the first anamorphic image based on a gaze point of a wearer at the second portion of the first anamorphic image. (see FIG. 8, ¶s 46-49: Asbun provides an AR system’s user 804 with gaze-point detection system 800 that detects their gaze direction and “determine[s] coordinates 810 of ROI subsystem 808”)
Knutsson and Niemasik are commonly related to object detection/tracking/identification with anamorphic imaging.
Niemasik discloses a second computer vision logic that is trained to: detect a face in the region-of-interest. (See ¶ 60, Niemasik discloses deep learning for detecting presence/location of one or more faces in an input image.)
It would have therefore been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to further combine the teachings of Knutsson and Asbun, in order to facilitate modes of interactivity and/or user interfaces that may be controlled by the user’s direction of view (Asbun ¶s 32, 45-46).
It would have further been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to also combine the teachings of Knutsson and Niemasik, in order to perform image capture/analysis and object detection in a more device-intelligent and less resource-intensive way (Niemasik ¶s 6-7, 37).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Knutsson in view of Sudoh and Menadeva, as applied to claim 4 above.
Regarding claim 6, modified Knutsson discloses the smart glasses apparatus of claim 4.
Knutsson further discloses where the first anamorphic lens and the second anamorphic lens are both vertically oriented (see ¶ 39: “a first axis (e.g., a horizontal image axis) can have a greater magnification than a second axis (e.g., a vertical axis)” – i.e. a greater vertical field-of-view).
Modified Knutsson thus discloses an orientation angle of 90° (with respect to the horizontal axis) that is close to, but does not explicitly overlap with, those claimed (e.g. orientation angles of >90° for a first/left lens and <90° for a second/right lens in the common Cartesian convention – where the first anamorphic lens and the second anamorphic lens are both obliquely oriented). Examiner finds, however, that no criticality has been established for such a range of orientation angles.
It would have therefore been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify Knutsson by slightly adjusting orientation angles of the anamorphic lenses, in order to meet various common design needs – e.g. perspective correction, reducing optical crosstalk between lens portions in multi-lens (such as binocular) devices, controlling flare, etc.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Knutsson in view of Sudoh, as applied to claim 1 above, and in further view of Niemasik and Plakal (WO 2019194906 A1, hereinafter “Niemasik”).
Regarding claim 7, modified Knutsson discloses the smart glasses apparatus of claim 1.
Knutsson further discloses (see ¶s 91, 95-97; FIGs. 9(A,B)) where the first anamorphic lens (lens system 700) is horizontally oriented (¶ 97: “In one illustrative example, the magnification along the horizontal axis (e.g., the first image axis 715) can be half of the magnification along the vertical axis (e.g., the second image axis 720)” – i.e. a greater horizontal field-of-view).
Modified Knutsson does not disclose that the first object comprises an eye, nor that the first computer vision logic is trained to determine a gaze point from a position of the eye.
Knutsson and Niemasik are commonly related to object detection/tracking/identification with anamorphic imaging.
Niemasik discloses that the first object comprises an eye, and the first computer vision logic is trained to determine a gaze point from a position of the eye. (See ¶ 60, Niemasik discloses deep learning for detecting presence/location of one or more faces in an input image using “facial landmarks, such as a position of eyes”; Niemasik also discusses the use of gaze paths/angles in various routines – see ¶s 111 and 124.)
It would have therefore been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to further combine the teachings of Knutsson and Niemasik, in order to perform image capture/analysis and object detection in a more device-intelligent and less resource-intensive way (Niemasik ¶s 6-7, 37).
Claims 8-9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Knutsson and Kardash (US 20240323507 A1, hereinafter “Knutsson”) in view of Sudoh (US 20240019699 A1), Niemasik and Plakal (WO 2019194906 A1, hereinafter “Niemasik”), and Shpunt (US 20180081178 A1).
Regarding claim 8, Knutsson discloses (see ¶s 88-91, FIG. 7(A,B)):
an anamorphic lens (lens system 700) configured to focus light onto a first camera sensor (image sensor 710), where the first camera sensor (image sensor 710) comprises an array of photosites (“array of photosensors”, see ¶ 102) that is enabled to capture a first image; and
a first computer vision logic that is trained to analyze a first object in the first image (see ¶s 2-7 regarding object detection/tracking/identification (i.e. analyzing)).
Knutsson does not disclose:
a smart glasses apparatus, comprising: an anamorphic lens
an array of photosites that is selectively enabled based on a user pupillary distance to capture a first image
Knutsson and Sudoh are commonly related to head-mounted extended reality devices with anamorphic lenses.
Sudoh discloses (see FIG. 1-2A, ¶s 39-58) a smart glasses apparatus (head-mounted display 1), comprising: an anamorphic lens (lens portion 30).
Knutsson and Niemasik are commonly related to object detection/tracking/identification with anamorphic imaging.
Niemasik discloses an array of photosites that is selectively enabled to capture a first image. (See ¶s 52, 181-186 – image sensor array 502, for example, has a sensor array size of 4000x300, and can perform binning/subsampling – directly affecting the selection of photosites which contribute towards imaging. See also ¶ 176 regarding cropping and scene analysis with apportioned low-/high-resolution imaging.)
Knutsson and Shpunt are commonly related to head-mounted extended reality devices and object detection/tracking/identification.
Shpunt discloses where the array of the photosites is selectively enabled based on a user pupillary distance. (See FIG. 1, ¶s 29-31; Shpunt describes foveated VR systems which track eye/head movement – i.e. (pupillary) distance from gaze tracking module 190 – and capture images with higher resolution in immediate/narrower fields-of-view and lower resolution in more peripheral fields-of-view.)
It would have therefore been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Knutsson and Sudoh, in order to enable a reduction of light guide thickness in the head-mounted display (Sudoh ¶ 37).
It would have also been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to further combine the teachings of Knutsson and Niemasik, in order to perform image capture/analysis and object detection with improved device intelligence and in a less resource-intensive way (Niemasik ¶s 6-7, 37).
It would have then been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to also combine the teachings of Knutsson and Shpunt, in order to render image data more selectively so as to reduce overhead, cost, and workload (Shpunt ¶s 4-6).
Regarding claim 9, modified Knutsson discloses the smart glasses apparatus of claim 8.
Knutsson further discloses (see ¶s 91, 95-97; FIGs. 9(A,B)) where the anamorphic lens (lens system 700) is horizontally oriented (¶ 97: “In one illustrative example, the magnification along the horizontal axis (e.g., the first image axis 715) can be half of the magnification along the vertical axis (e.g., the second image axis 720)” – i.e. a greater horizontal field-of-view).
Niemasik further discloses that the first object comprises an eye. (See ¶ 60, Niemasik discloses deep learning for detecting presence/location of one or more faces in an input image using “facial landmarks, such as a position of eyes”; Niemasik also discusses the use of gaze paths/angles in various routines – see ¶s 111 and 124.)
Regarding claim 11, modified Knutsson discloses the smart glasses apparatus of claim 9.
Knutsson further discloses where the anamorphic lens (lens system 700) bends the light corresponding to a distance and an angle of a light source relative to the anamorphic lens (lens system 700). (Examiner notes that this limitation is always satisfied for any lens exposed to a light source: standard optics holds that the bending (i.e. refracting) of light by any real lens will always be affected by – and thus correspond to – the relative displacement (distance, direction/angle) of the light source; see also the illustration given with claim 16 above.)
Claims 12 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Knutsson in view of Sudoh, Niemasik, and Shpunt, as applied to claim 11 above, and in further view of Yu et al (US 20230194882 A1, hereinafter “Yu”).
Regarding claim 12, modified Knutsson discloses the smart glasses apparatus of claim 11.
Modified Knutsson does not disclose where the first computer vision logic is trained to disambiguate a first flare that corresponds to a pupil of the eye and a second flare that corresponds to a retina of the eye.
Knutsson and Yu are commonly related to head-mounted extended reality devices and object detection/tracking/identification.
Yu discloses where the first computer vision logic is trained to disambiguate a first flare that corresponds to a pupil (556) of the eye (550) and a second flare that corresponds to a retina (560) of the eye (550). (See FIG. 5, ¶s 98-99; Yu describes an eye-tracking system and notes that different eye portions scatter light differently (i.e. producing different flare or intensity patterns) and that these different signatures enable identification of different portions of the eye.)
It would have therefore been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to further combine the teachings of Knutsson and Yu, for reduced power consumption, cost efficiency, and improved accuracy in (eye-tracking) applications (Yu ¶s 49-55, 98).
Regarding claim 21, modified Knutsson discloses the smart glasses apparatus of claim 11.
Modified Knutsson does not disclose where the first computer vision logic is trained to disambiguate a first flare that corresponds to a pupil of the eye and a second flare that corresponds to a retina of the eye based on a differing anamorphic streak size and shape between the first flare and the second flare.
Knutsson and Yu are commonly related to head-mounted extended reality devices and object detection/tracking/identification.
Yu discloses where the first computer vision logic is trained to disambiguate a first flare that corresponds to a pupil (556) of the eye (550) and a second flare that corresponds to a retina (560) of the eye (550) (see FIG. 5, ¶s 98-99; Yu describes an eye-tracking system and notes that different eye portions scatter light differently (i.e. producing different flare or intensity patterns) and that these different signatures enable identification of different portions of the eye) based on a differing anamorphic streak size and shape between the first flare and the second flare (where anamorphism naturally emerges when combining Yu with the previously cited art (i.e. with Knutsson and its anamorphic lens system 700), and where differing streak sizes/shapes are naturally encompassed by Yu’s different intensity patterns).
It would have therefore been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to further combine the teachings of Knutsson and Yu, for reduced power consumption, cost efficiency, and improved accuracy in (eye-tracking) applications (Yu ¶s 49-55, 98).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Knutsson, as applied to claim 13 above.
Regarding claim 14, Knutsson discloses the camera module of claim 13.
Knutsson further discloses a second computer vision logic that is trained to analyze a second light information from a second portion of the camera sensor (image sensor 710) (see ¶s 2-7 regarding detection/tracking/identification (i.e. analyzing). Examiner also notes that one may always freely distinguish a “first” and “second” portion of any object with finite dimensions. Similarly, one may always practically designate a “first” and “second” light for any amount of light which strikes a two-dimensional surface of the object).
Knutsson does not directly disclose the claimed second neural network. However, as noted with regard to claim 13 above (much of which the current claim 14 essentially duplicates, replacing “first” with “second”), Knutsson discloses at least a first neural network. Thus, even under the most conservative reading of Knutsson, in which only one neural network is present, the claimed invention effectively distinguishes over the prior art in mere matters of duplication of parts (along with any trivially associated organization/grouping of structure, machine instruction, and logic), so as to employ two neural networks instead of one.
It would have therefore been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Knutsson by implementing two neural networks instead of one, in order to compartmentalize and/or parallelize tasks for improved efficiency, computing speed, etc. – since it has been held that mere duplication of the essential working parts of a device involves only routine skill in the art. St. Regis Paper Co. v. Bemis Co., 193 USPQ 8; In re Harza, 274 F.2d 669, 124 USPQ 378 (CCPA 1960).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Knutsson, as applied to claim 13 above, in view of Shpunt (US 20180081178 A1).
Regarding claim 15, Knutsson discloses the camera module of claim 13.
Knutsson does not disclose where the first computer vision logic is further trained to deactivate a second portion of the camera sensor.
Knutsson and Shpunt are commonly related to head-mounted extended reality devices and object detection/tracking/identification.
Shpunt discloses where the first computer vision logic is further trained to deactivate a second portion of the camera sensor. (See FIG. 1, ¶s 29-31; Shpunt describes foveated VR systems which track eye/head movement and capture images with higher resolution in immediate/narrower fields-of-view and lower resolution in more peripheral fields-of-view.)
It would have therefore been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to further combine the teachings of Knutsson and Shpunt, in order to render image data more selectively so as to reduce overhead, cost, and workload (Shpunt ¶s 4-6).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Knutsson, as applied to claim 13 above, in view of Niemasik and Plakal (WO 2019194906 A1, hereinafter “Niemasik”).
Regarding claim 20, Knutsson discloses the camera module of claim 13.
Knutsson further discloses linear image data (see ¶s 95-97; FIGs. 9(A,B)).
Knutsson does not disclose where the first computer vision logic is trained on the linear image data.
Knutsson and Niemasik are commonly related to object detection/tracking/identification with anamorphic imaging.
Niemasik discloses where the first computer vision logic is trained on the linear image data. (See ¶s 131-134 discussing training/retraining of (e.g. neural network) models.)
It would have therefore been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Knutsson and Niemasik, in order to perform image capture/analysis and object detection in a more device-intelligent and less resource-intensive way (Niemasik ¶s 6-7, 37).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Yang et al and Kumar et al (see PTO-892) are both directed to neural networks for computer vision applications and analyzing distorted images. Each recognizes issues with, and explores solutions avoiding, image pre-processing/rectification. While the two references focus more specifically on circular fisheye lenses, which are not “anamorphic” in the traditional sense, Examiner would expect similar efforts for anamorphic or cylindrically distorted lenses generally to be simpler, as the preserved Cartesian coordinate geometry is typically more accommodating.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WAI-GA D. HO whose telephone number is (571) 270-1624. The examiner can normally be reached Monday through Friday, 10AM - 6PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephone Allen can be reached at (571) 272-2434. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/W.D.H./Examiner, Art Unit 2872
/STEPHONE B ALLEN/Supervisory Patent Examiner, Art Unit 2872