Prosecution Insights
Last updated: April 19, 2026
Application No. 18/067,604

LENTICULAR IMAGE GENERATION

Status: Non-Final OA (§103)
Filed: Dec 16, 2022
Examiner: CHEN, YU
Art Unit: 2613
Tech Center: 2600 (Communications)
Assignee: Apple Inc.
OA Round: 3 (Non-Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 10m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 68% (above average; 711 granted / 1052 resolved; +5.6% vs TC avg)
Interview Lift: +29.9% (allow rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 2y 10m average prosecution; 110 applications currently pending
Career History: 1,162 total applications across all art units
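The lift figure above appears to be the percentage-point gap between the allow rate of resolved cases that had at least one examiner interview and those that did not. A quick sketch of that assumed definition; the cohort counts below are invented, chosen only to reproduce the displayed +29.9%:

```python
def interview_lift(with_grants, with_total, without_grants, without_total):
    """Percentage-point difference in allow rate between interview cohorts.

    Assumed definition: lift = allow rate (with interview) minus
    allow rate (without interview), in percentage points.
    """
    return 100 * (with_grants / with_total - without_grants / without_total)

# Invented cohort counts chosen to reproduce the ~+29.9% shown above.
print(round(interview_lift(98, 100, 613, 900), 1))  # -> 29.9
```

On that reading, the displayed 98% "with interview" is consistent with the 68% base allow rate plus the 29.9-point lift.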

Statute-Specific Performance

Statute | Examiner rate | vs TC avg
§101    |  2.2%         | -37.8%
§103    | 43.9%         |  +3.9%
§102    | 27.0%         | -13.0%
§112    | 20.7%         | -19.3%

Tech Center averages are estimates • Based on career data from 1052 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/12/2025 has been entered.

Response to Amendment

This is in response to applicant's amendment/response filed on 11/21/2025, which has been entered and made of record. Claims 1-20 are pending in the application.

Response to Arguments

Applicant's arguments filed on 11/21/2025 have been fully considered but they are not persuasive.

Applicant submits: "The cited portions of Lange disclose 'pairs' comprising only 'left and right' stereo images and do not disclose 'interleaved images' that require an interleaving pattern. Left/right pairs that provide stereo-corresponding images are merely depicting a side-by-side relationship and would not disclose 'interleaved images' wherein there are repeated patterns and arrangements of multiple images in the interleaving pattern. A 'stereogram' does not provide multiple viewing angles, but rather (as defined in paragraph 57 of Lange) 'compris[es] left and right images of the object' to emulate the natural way of viewing three-dimensionally using binocular vision. Lenticular images comprising interleaved images provide a range of viewing angles (for example, 'V1', 'V2', 'V3', as shown in FIGs. 1A-1C of the Specification) via the interleaving of images for different viewing angles. The cited portions of Lange are silent with respect to 'generate a plurality of lenticular images from texture information for the object obtained from at least one of the one or more sensors and the fixed mapping information, wherein the plurality of lenticular images comprise interleaved images associated with different viewing angles.'"

The examiner disagrees with Applicant's premises and conclusion. That "the plurality of lenticular images comprise interleaved images associated with different viewing angles" is well known to one of ordinary skill in the art. Lange teaches that CrystalEyes™ liquid crystal shutter glasses are widely available, as are recently developed autostereoscopic displays (e.g., U.S. Pat. No. 6,118,584). An autostereoscopic display (e.g., one using a lenticular screen) is known to have "the plurality of lenticular images comprise interleaved images associated with different viewing angles".

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6 and 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Lange (US Pub 2012/0182403 A1) in view of Van Berkel et al. (US Patent 6,118,584) and Pandey et al. (US Pub 2022/0014723 A1).

As to claim 1, Lange discloses a device, comprising:

a lenticular display (¶0073, "The substrate may be configured to present a stereoscopic representation of the object to a user without using stereoscopic eyewear. For example, the substrate may comprise material configured for such a purpose. In one embodiment, the substrate may comprise a lenticular screen.");

one or more sensors; memory; and a controller comprising one or more processors (¶0012, ¶0088, ¶0186), configured to:

generate fixed mapping information for an object from texture and mesh information for the object obtained from the one or more sensors (¶0094, "A stereogram is a related pair of images, which have been captured or created in such as way as to give the appearance of depth when seen through an appropriate stereo viewer. The term substrate, as it is used here, refers to the digital or analog surface onto which the stereo imagery is mapped, rendered or projected." ¶0186, "The standard method of mapping texture imagery onto an associated polygon or set of polygons is by using a special set of two-dimensional mapping coordinates, commonly referred to as 2D “texture coordinates.” For a given polygon, each vertex is assigned a pair of (U,V) texture mapping coordinates. For a set of three vertices (used to construct an individual polygon in the derived 3D substrate), the 3D vertices have a set of corresponding two-dimensionally plotted points on the left and right imagery. The positions of these plotted image points naturally correspond to the extracted polygonal vertices, by virtue of the initial perspective projection created by the cameras that were used to capture the original stereogram. The 3D polygon, therefore, is naturally projectively mapped into two-dimensional image space, and will also (if arranged correctly) be projected within the boundaries of a particular texture map." ¶0202, "the substrate is composed of 3D data derived from measurements of the object itself, rather than from the stereogram that was used to record the object. This three-dimensional data can be gathered from a variety of sources, such as hand measurements, plans, diagrams, laser theodolite mapping, laser rangefinder scanning, etc. The derived points, which will function as zero parallax points, are used to construct the vertices of polygonal face sets or meshes. The relative orientation of the stereograms to the object of interest should be known. The orientation of the independently derived 3D data should also be known to a common reference frame for the original object and the camera stations that captured the original stereogram." ¶0203, "It is then therefore possible, using standard projective transformation equations (Eqns 1.1-1.4), to project the 3D meshes, or their 3D vertices into the 2D image space of the left and right digitized images or photos. A set of 2D corresponding left and right image coordinates will be generated by this process. A set of texture maps can be defined for each left and right image. Therefore it is possible to convert the 2D corresponding left and right image coordinates into texture coordinates referenced to their respective texture map's position in the larger imagery. The whole compliment of data sets needed for a CSTM have then been created: one three-dimensional substrate, a left set of texture coordinates and texture maps, and a right set of texture coordinates and texture maps.");

store the fixed mapping information to the memory (¶0184, "Generally, real-time systems and their associated graphics hardware (i.e., a graphics card with dedicated texture memory) more readily accept arrays of images" ¶0219, "the left and right data sets are always in computer memory");

generate a plurality of lenticular images from texture information for the object obtained from at least one of the one or more sensors and the fixed mapping information (¶0068, "determining a perspective centre of each of the views of the stereogram (e.g. rear nodal point of a camera lens used to each image of the stereogram)." ¶0153, "Before the stereo plotted points can be converted into a 3D polygonal mesh, one must determine for the left and right cameras their spatial position and orientation and the effective calibrated focal length of the lenses used. Preferably camera calibration data should also be used, such as the radial and tangential distortion of the lenses, as well as the coordinates for the intersection point of the axis of the lens to the coordinate system of the image plane. Additionally, a 2D affine transformation needs to be found or determined for the conversion of the plotted vertices of the left and right meshes (in plotter coordinates) to image frame coordinates (i.e., the actual spatial x and y coordinates referenced to the original photo frames)." ¶0165, "If the radial distortion of the lenses is compensated, then models of a very reasonable spatial fidelity can be achieved. The derived points are then used to form the surfaces of the polygonal substrate and the usual processes for the CSTM are carried out to calculate the correct texture coordinates." ¶0253, "an analog CSTM would involve creating a simple three-dimensional substrate capable of presenting separate stereo views to the left and right eyes without specialist eyewear. In other words, the substrate itself would comprise an autostereoscopic display (e.g. using a lenticular screen), with the stereo imagery projected, rendered, or printed onto it, as appropriate." ¶0285.); and

provide the lenticular images to the lenticular display (¶0073, "the substrate may comprise a lenticular screen." ¶0253, "the substrate itself would comprise an autostereoscopic display (e.g. using a lenticular screen), with the stereo imagery projected, rendered, or printed onto it, as appropriate.").

Lange does not explicitly disclose that the plurality of lenticular images comprise interleaved images associated with different viewing angles. However, Lange suggests in its background that autostereoscopic displays (using a lenticular screen) are well known, and further provides Van Berkel et al., U.S. Pat. No. 6,118,584, as an example. Van Berkel et al. teaches the plurality of lenticular images comprise interleaved images associated with different viewing angles (Van Berkel, Fig. 4, Col 6, lines 48-66, "a lenticular screen 15 having an array of parallel optically cylindrically converging lenticules indicated at 16 in FIG. 3, each of which extends in the column direction and overlies a respective column of display element groups, that is, four columns of display elements." "The display produced comprises interleaved 2D sub-images which can be seen by the left and right eye of a viewer and constituted by the outputs from respective columns of display elements." "As the viewer's head moves in the row, X, direction (FIG. 3) that is, up and down in FIG. 4, then three stereoscopic images can be viewed, as provided by the beams 26 and 27, 27 and 28, and 28 and 29 respectively. The display elements 12 in a group are in effect substantially contiguous with one another in row direction X. Dark regions separating the output beams are eliminated and continuous horizontal parallax is obtained."). Lange and Van Berkel are considered to be analogous art because both pertain to image display. It would have been obvious before the effective filing date of the claimed invention to have modified Lange with the features of "the plurality of lenticular images comprise interleaved images associated with different viewing angles" as taught by Van Berkel. The claim would have been obvious because a particular known technique was recognized as part of the ordinary capabilities of one skilled in the art.

Pandey also teaches the plurality of lenticular images comprise interleaved images associated with different viewing angles (Pandey, Fig. 25, ¶0155, "The autostereoscopic displays employ optical components to achieve a 3D effect for a variety of different images on the same plane and providing such images from a number of points of view to produce the illusion of 3D space." ¶0158, "the content may be displayed by interleaving a left image 2504A with a right image 2504B to obtain an output image 2505." "at a particular viewing condition, the left eye of the user views a first subset of pixels associated with an image, as shown by viewing rays 2510, while the right eye of the user views a mutually exclusive second subset of pixels, as shown by viewing rays 2512." ¶0177, "employs multiview stereo in an end-to-end fashion to generate intermediate view of city landscapes."). Lange and Pandey are considered to be analogous art because both pertain to image display. It would have been obvious before the effective filing date of the claimed invention to have modified Lange with the features of "the plurality of lenticular images comprise interleaved images associated with different viewing angles" as taught by Pandey. The claim would have been obvious because a particular known technique was recognized as part of the ordinary capabilities of one skilled in the art.

As to claim 2, claim 1 is incorporated and the combination of Lange, Van Berkel and Pandey discloses the lenticular display comprises: a display panel; and a lenticular lens attached to a surface of the display panel (Lange, ¶0253, "the substrate itself would comprise an autostereoscopic display (e.g. using a lenticular screen), with the stereo imagery projected, rendered, or printed onto it, as appropriate.").
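For orientation, the interleaving pattern at issue in the argument above can be sketched in a few lines. This is a minimal illustrative sketch, not anything from the record: it assumes N equally sized pre-rendered views and the simple column-repeating assignment Van Berkel describes (one view per display column under each lenticule), which is what distinguishes a multi-view interleave from a side-by-side stereo pair.

```python
import numpy as np

def interleave_views(views):
    """Column-interleave N same-sized view images into one lenticular image.

    views: list of H x W x 3 arrays, one per viewing angle (V1, V2, ...).
    Output column x shows view (x mod N), i.e. a repeating interleaving
    pattern rather than a left/right side-by-side arrangement.
    """
    n = len(views)
    h, w, c = views[0].shape
    out = np.empty((h, w, c), dtype=views[0].dtype)
    for i, view in enumerate(views):
        out[:, i::n, :] = view[:, i::n, :]
    return out

# Example: three 4x6 test views with distinct constant values.
views = [np.full((4, 6, 3), fill, dtype=np.uint8) for fill in (0, 128, 255)]
lenticular = interleave_views(views)
assert (lenticular[:, 0] == 0).all() and (lenticular[:, 1] == 128).all()
```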
As to claim 3, claim 2 is incorporated and the combination of Lange, Van Berkel and Pandey discloses the display panel is one of a liquid crystal display (LCD), an organic light-emitting diode (OLED) technology display, a DLP (digital light processing) technology display, or an LCoS (liquid crystal on silicon) technology display (Lange, ¶0006, "Improved forms of stereo eyewear, such as CrystalEyes™ liquid crystal shutter glasses, are widely available, and the recently developed autostereoscopic displays (e.g. U.S. Pat. No. 6,118,584)" ¶0173, "LCD shutter glasses are used (such as CrystalEyes™ eye wear) that alternately show the left and right images to their respective eyes." ¶0217, "With the use of special eye wear such as LCD shutter glasses (e.g. CrystalEyes™)").

As to claim 4, claim 2 is incorporated and the combination of Lange, Van Berkel and Pandey discloses the lenticular display displays individual ones of the lenticular images on the display panel, and wherein the lenticular lens is configured to provide three-dimensional virtual views of the object displayed on the display panel from multiple different viewing angles (Lange, ¶0006, "Improved forms of stereo eyewear, such as CrystalEyes™ liquid crystal shutter glasses, are widely available, and the recently developed autostereoscopic displays (e.g. U.S. Pat. No. 6,118,584)" ¶0253, "the substrate itself would comprise an autostereoscopic display (e.g. using a lenticular screen), with the stereo imagery projected, rendered, or printed onto it, as appropriate.").

As to claim 5, claim 1 is incorporated and the combination of Lange, Van Berkel and Pandey discloses that, to generate fixed mapping information for an object from texture and mesh information for the object, the controller is configured to:

render UV map views for multiple viewing angles of the lenticular display from the obtained texture and mesh information (Lange, ¶0186, "The standard method of mapping texture imagery onto an associated polygon or set of polygons is by using a special set of two-dimensional mapping coordinates, commonly referred to as 2D “texture coordinates.” For a given polygon, each vertex is assigned a pair of (U,V) texture mapping coordinates. For a set of three vertices (used to construct an individual polygon in the derived 3D substrate), the 3D vertices have a set of corresponding two-dimensionally plotted points on the left and right imagery. The positions of these plotted image points naturally correspond to the extracted polygonal vertices, by virtue of the initial perspective projection created by the cameras that were used to capture the original stereogram. The 3D polygon, therefore, is naturally projectively mapped into two-dimensional image space, and will also (if arranged correctly) be projected within the boundaries of a particular texture map." ¶0187, "It is therefore a simple matter to convert the two-dimensional plotted coordinates for the projected polygon into texture-mapping coordinates, assuming the spatial position (20.07, 20.08) of the sub-rectangle of pixels that constitute the texture map is defined or known. Generally, texture coordinates are of a parametric form, meaning that the values for the position of an individual texture coordinate are scaled from 0 to a maximum value of 1. FIG. 20.05 shows the position of a left plotted image point. Here it can be seen that the X and Y coordinates of the image point (20.05) correspond to U and V coordinates within the frame of the texture map (20.03). Relative to the position of the left texture map, a left set of texture coordinates are calculated for the plotted left hand image points. Similarly a set of right hand texture coordinates are calculated from the positions of the right hand stereo plotted points with respect to the position of the right texture map in the right image. We now therefore arrive at a complete set of elements from which a CSTM can be composed or rendered." ¶0191-0199.);

generate view maps from a ray tracing model and calibration data for the lenticular display (Lange, ¶0104, "where the position of the screen has been adjusted so that the rays projecting from one pair of left and right image points (7.07b 7.08b) corresponding to object point 7.01B now converge perfectly at the surface of the screen, reducing the surface parallax for that point pair to zero (7.03). If this single large screen were to be replaced by a series of small screens, each set at the exact location where a specially selected pair of corresponding image rays intersect in three dimensional space, then each of these specially selected pairs of points would have their surface parallaxes eliminated." ¶0108, "For each selected image point (e.g. 10.05a, 10.06a), a ray is then projected through the respective camera's perspective center (10.02, 10.03), and calculations (see Eqns 1.5-1.30) are performed to determine the point at which the rays from corresponding left and right image points would intersect in three-dimensional space (e.g. 10.01A). This hypothetical value is referred to here as the stereo ray intersection point, and in theory it represents the location on the original stereo-recorded object (10.01) which gave rise to the pair of corresponding image points in the stereogram." ¶0110, "the stereo ray intersection points will be calculated from specially plotted points in the stereo imagery, and these values will determine the placement of the vertices in the three-dimensional substrate, so that each vertex represents a zero parallax point. However, it is also possible to construct the substrate first, based on data from sources other than the stereo imagery, and then use the vertices (which have been chosen to serve as zero parallax points) as the hypothetical location of the stereo ray intersection points, from which the location of the corresponding image points can be calculated (or, in some applications, "forced" into compliance)." ¶0111, "Note that these points have been placed at the locations where pairs of stereo corresponding rays intersect in three-dimensional space, and also that the position of the vertices accurately reflects the position of the original object point on the surface of the stereo-recorded object (11.02). Since this substrate (11.01) is only an approximation of the original object, the surface parallax has only been eliminated for some of the pairs of image points, i.e., those whose rays meet at the surface of the substrate. This includes those points which have been specifically calculated as zero parallax points (11.01A, B, C) as well as others which just happen to intersect at the surface of the substrate (e.g. 11.08), which may be referred to as "incidental" zero parallax points. However, there are many more pairs of image points whose rays would intersect at various points in front of or behind the substrate (e.g. 11.09)." ¶0199, "This true projective mapping for texels that do not have explicit texture coordinates created for them is further demonstrated by projecting a ray from a corresponding point on the surface of the stereo-recorded object (24.19) to the perspective center (24.10) of the left image from the original stereogram. It can be seen that this ray passes through the corresponding point (24.12) on the substrate's surface and the point for sampling texels on the texture map (24.13). This sampling can be carried out by the rendering engine without any direct knowledge of the original object point or the 3D position of the perspective center of the left image."); and

generate a lenticular to UV map for the object from the UV map views and the view maps (Lange, ¶0158, "As its name suggests, the "image-derived" method uses data extracted from the original stereograms to determine the shape of the substrate. Since the vertices of the substrate must be placed so that they will function as zero parallax points when the stereo imagery (in the form of texture maps) is applied, it is necessary to determine the location where selected pairs of stereo rays intersect in three-dimensional space. However, even when a stereogram is physically projected into space (e.g., using an optical stereo projection system) it is not normally possible to see or experience where a projected pair of rays intersect. The intersection point must therefore be determined indirectly through the knowledge of certain parameters governing the ray geometry of the stereo imagery." ¶0172, "Normally when rays from three dimensional points project through the perspective center of an imaging system, the image formed in the camera is essentially flipped both horizontally and vertically. It is customary to present the images as diapositives (i.e., right way up) on a stereo viewing screen. The projective geometry of the diapositive is the same as that of the negative except for the fact that the diapositive lies in front of the perspective center on the imaging system as depicted in FIG. 16. The perspective centers for the left and right diapositives (16.04, 16.05) lie behind the plane of the imagery." ¶0180, "The true image coordinates may then be adjusted for radial distortion and other calibrated offsets and systematic errors (Eqns 1.30-1.31)." ¶0236, "the stereo ray compliant CSTM can be transformed into the same shape as the arbitrary substrate created from the left or right plotted 2D mesh.");

wherein the fixed mapping information includes the lenticular to UV map (Lange, ¶0186, "For a given polygon, each vertex is assigned a pair of (U,V) texture mapping coordinates." ¶0235, "The texture mapping is carried out using the same texture coordinates as that of the stereo ray compliant substrate (26.01), resulting in the same stereo image points being mapped to the same corresponding vertices (zero parallax points) on the arbitrary substrate (26.03)." ¶0240, "This is true regardless of whether the zero parallax points are those specially selected to serve as vertices of the polygonal substrate or whether they are "coincidental" zero parallax points occurring between the vertices, where pairs of stereo rays happen to converge at the surface of the substrate." ¶0255, ¶0270-0288.).
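Claim 5's off-line pipeline combines two lookups: per-view UV renders and a calibration-derived view map saying which view each display pixel belongs to. The sketch below is a hedged reconstruction, not the applicant's or the examiner's implementation: it assumes both inputs can be stored as dense arrays (the names uv_views and view_map are invented), and composes them into the fixed lenticular-to-UV lookup the claim recites.

```python
import numpy as np

def build_lenticular_to_uv_map(uv_views, view_map):
    """Compose a fixed lenticular-to-UV lookup table (off-line step).

    uv_views: (N, H, W, 2) array; uv_views[v, y, x] is the texture (U, V)
              seen at display pixel (x, y) when rendering viewing angle v.
    view_map: (H, W) int array from the ray-tracing/calibration model,
              giving the view index the lenticular lens sends pixel (x, y) to.
    Returns an (H, W, 2) map: for each display pixel, the (U, V) texture
    coordinate to sample at run time.
    """
    n, h, w, _ = uv_views.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return uv_views[view_map, ys, xs]  # fancy-index each pixel by its view

# Tiny example: 2 views on a 2x4 display with an alternating-column view map.
uv_views = np.random.rand(2, 2, 4, 2).astype(np.float32)
view_map = np.tile(np.arange(2), (2, 2))  # columns 0,1,0,1 per row
lut = build_lenticular_to_uv_map(uv_views, view_map)
assert lut.shape == (2, 4, 2)
```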
As to claim 6, claim 1 is incorporated and the combination of Lange, Van Berkel and Pandey discloses that, to generate a plurality of lenticular images from texture information for the object obtained from at least one of the one or more sensors and the fixed mapping information, the controller is configured to, for each lenticular image to be generated:

obtain current texture information for the object from the at least one sensor (Lange, ¶0094, "A stereogram is a related pair of images, which have been captured or created in such as way as to give the appearance of depth when seen through an appropriate stereo viewer. The term substrate, as it is used here, refers to the digital or analog surface onto which the stereo imagery is mapped, rendered or projected." ¶0186, "The standard method of mapping texture imagery onto an associated polygon or set of polygons is by using a special set of two-dimensional mapping coordinates, commonly referred to as 2D “texture coordinates.” For a given polygon, each vertex is assigned a pair of (U,V) texture mapping coordinates. For a set of three vertices (used to construct an individual polygon in the derived 3D substrate), the 3D vertices have a set of corresponding two-dimensionally plotted points on the left and right imagery. The positions of these plotted image points naturally correspond to the extracted polygonal vertices, by virtue of the initial perspective projection created by the cameras that were used to capture the original stereogram. The 3D polygon, therefore, is naturally projectively mapped into two-dimensional image space, and will also (if arranged correctly) be projected within the boundaries of a particular texture map." ¶0202, "the substrate is composed of 3D data derived from measurements of the object itself, rather than from the stereogram that was used to record the object. This three-dimensional data can be gathered from a variety of sources, such as hand measurements, plans, diagrams, laser theodolite mapping, laser rangefinder scanning, etc. The derived points, which will function as zero parallax points, are used to construct the vertices of polygonal face sets or meshes. The relative orientation of the stereograms to the object of interest should be known. The orientation of the independently derived 3D data should also be known to a common reference frame for the original object and the camera stations that captured the original stereogram." ¶0203, "It is then therefore possible, using standard projective transformation equations (Eqns 1.1-1.4), to project the 3D meshes, or their 3D vertices into the 2D image space of the left and right digitized images or photos. A set of 2D corresponding left and right image coordinates will be generated by this process. A set of texture maps can be defined for each left and right image. Therefore it is possible to convert the 2D corresponding left and right image coordinates into texture coordinates referenced to their respective texture map's position in the larger imagery. The whole compliment of data sets needed for a CSTM have then been created: one three-dimensional substrate, a left set of texture coordinates and texture maps, and a right set of texture coordinates and texture maps."); and

generate the lenticular image by sampling pixels from the current texture information based on the fixed mapping information (Lange, ¶0010, "The basic process for converting these mathematically calculated projections and transformations into pixels on a screen is called rendering. Hardware and software systems do this by determining what color each screen pixel should be, based on the final summation of all of the various instructions for that point, such as lighting, shading, texturing, etc." ¶0068, "determining a perspective centre of each of the views of the stereogram (e.g. rear nodal point of a camera lens used to each image of the stereogram)." ¶0098, "This effect is possible because the substrate of a CSTM is itself a three-dimensional facsimile of the original object, constructed using measurements derived either from the stereo imagery or from the object itself. The stereograms are then mapped onto this facsimile by matching a specific subsample of stereo image points to their corresponding points on the facsimile. The process of generating the substrate and applying the imagery to it is referred to as coherent stereo-texturing." ¶0153, "Before the stereo plotted points can be converted into a 3D polygonal mesh, one must determine for the left and right cameras their spatial position and orientation and the effective calibrated focal length of the lenses used. Preferably camera calibration data should also be used, such as the radial and tangential distortion of the lenses, as well as the coordinates for the intersection point of the axis of the lens to the coordinate system of the image plane. Additionally, a 2D affine transformation needs to be found or determined for the conversion of the plotted vertices of the left and right meshes (in plotter coordinates) to image frame coordinates (i.e., the actual spatial x and y coordinates referenced to the original photo frames)." ¶0165, "If the radial distortion of the lenses is compensated, then models of a very reasonable spatial fidelity can be achieved. The derived points are then used to form the surfaces of the polygonal substrate and the usual processes for the CSTM are carried out to calculate the correct texture coordinates." ¶0192, "the screen image points that correspond to the image points in the texture imagery are correctly sampled and calculated in real-time.").

As to claim 12, the combination of Lange, Van Berkel and Pandey discloses a method, comprising: performing, by one or more processors of a device in an off-line process: generating fixed mapping information for an object from texture and mesh information for the object obtained from one or more sensors; storing the fixed mapping information to the memory; performing, by the one or more processors of the device in a real-time process: generating a plurality of lenticular images from texture information for the object obtained from at least one of the one or more sensors and the fixed mapping information, wherein the plurality of lenticular images comprise interleaved images associated with different viewing angles; and providing the lenticular images to a lenticular display (See claim 1 for detailed analysis.).

As to claim 13, claim 12 is incorporated and the combination of Lange, Van Berkel and Pandey discloses the lenticular display includes a display panel and a lenticular lens attached to a surface of the display panel, the method further comprising displaying, by the lenticular display, individual ones of the lenticular images on the display panel, wherein the lenticular lens is configured to provide three-dimensional virtual views of the object displayed on the display panel from multiple different viewing angles (See claim 4 for detailed analysis.).
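The real-time half of the claimed split (claims 6 and 12) then reduces to a per-frame table lookup through the precomputed map. A minimal sketch under the same invented array layout as above; nearest-neighbor sampling is an assumption here, not something the claims specify:

```python
import numpy as np

def render_lenticular_frame(texture, lut):
    """Sample the current texture through the fixed lenticular-to-UV map.

    texture: (TH, TW, 3) current texture for the object (e.g., from a sensor).
    lut:     (H, W, 2) per-pixel (U, V) coordinates in [0, 1), built off-line.
    Returns the (H, W, 3) lenticular image for this frame (nearest-neighbor).
    """
    th, tw, _ = texture.shape
    u = np.clip((lut[..., 0] * tw).astype(int), 0, tw - 1)
    v = np.clip((lut[..., 1] * th).astype(int), 0, th - 1)
    return texture[v, u]

# Per frame, only the texture changes; the lut stays fixed in memory.
texture = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)
lut = np.random.rand(2, 4, 2).astype(np.float32)
frame = render_lenticular_frame(texture, lut)
assert frame.shape == (2, 4, 3)
```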
As to claim 14, claim 12 is incorporated and the combination of Lange, Van Berkel and Pandey discloses generating fixed mapping information for an object from texture and mesh information for the object obtained from one or more sensors comprises: rendering UV map views for multiple viewing angles of the lenticular display from the obtained texture and mesh information; generating view maps from a ray tracing model and calibration data for the lenticular display; and generating a lenticular to UV map for the object from the UV map views and the view maps; wherein the fixed mapping information includes the lenticular to UV map (See claim 5 for detailed analysis.).

As to claim 15, claim 12 is incorporated and the combination of Lange, Van Berkel and Pandey discloses generating a plurality of lenticular images from texture information for the object obtained from at least one of the one or more sensors and the fixed mapping information comprises, for each lenticular image to be generated: obtaining current texture information for the object from the at least one sensor; and generating the lenticular image by sampling pixels from the current texture information based on the fixed mapping information (See claim 6 for detailed analysis.).

Claims 7-8 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Lange (US Pub 2012/0182403 A1) in view of Van Berkel et al. (US Patent 6,118,584) and Pandey et al. (US Pub 2022/0014723 A1), further in view of Aguirre-Valencia et al. (US Pub 2020/0265594 A1).

As to claim 7, claim 6 is incorporated and Lange does not explicitly disclose the fixed mapping information includes a lenticular to UV map that maps subpixels in the current texture information to subpixels of the lenticular display. Aguirre-Valencia teaches "maps subpixels in the current texture information to subpixels of the lenticular display" (Aguirre-Valencia, ¶0194, "When the preceding embodiments are used in conjunction with a lenticular array display or a parallax-barrier-type display, the computer system may perform so-called 'pixel mapping' or 'dynamic subpixel layout' (DSL). This is illustrated in FIG. 17, which presents a drawing illustrating a side view of a lenticular array display 1700. As described further below with reference to FIGS. 18-22, when generating stereoscopic images, the computer system may position a current rendered image in pixels (such as pixel 1712) in an LCD panel on the display, so that the optics sends or directs the current rendered image to an eye of interest (such as the left or right eye). The pixel mapping may be facilitated by a combination of head or gaze tracking, knowledge of the display geometry and mixing of the current rendered image on a subpixel level (such as for each color in an RGB color space). For example, the current rendered image may be displayed in pixels corresponding to the left eye 60% of the time and in pixels corresponding to the right eye 40%. This pixel-based duty-cycle weighting may be repeated for each color in the RGB color space. Note that the duty-cycle weighting may be determined by the position of which ever eye (left of right) that is closest to the optical mapping of a display lens (such as lens 1710) and the current rendered image. In some embodiments, a left or right projection matrix is used to define how the rays from the current rendered image relate to a tracked left or right eye. Thus, based at least in part on the position of the left and right eyes relative to lenticular array display 1700, the computer system may give more duty-cycle weighting to the left eye or the right eye."). Lange and Aguirre-Valencia are considered to be analogous art because both pertain to image display. It would have been obvious before the effective filing date of the claimed invention to have modified Lange with the features of "maps subpixels in the current texture information to subpixels of the lenticular display" as taught by Aguirre-Valencia. The suggestion/motivation would have been so that the optics sends or directs the current rendered image to an eye of interest (Aguirre-Valencia, ¶0194).

As to claim 8, claim 7 is incorporated and the combination of Lange and Aguirre-Valencia discloses the controller is further configured to: detect one or more persons in front of the lenticular display; and generate the lenticular image by sampling only subpixels from the current texture information that correspond to viewing angles of the detected one or more persons (Aguirre-Valencia, ¶0194, "The pixel mapping may be facilitated by a combination of head or gaze tracking, knowledge of the display geometry and mixing of the current rendered image on a subpixel level (such as for each color in an RGB color space). For example, the current rendered image may be displayed in pixels corresponding to the left eye 60% of the time and in pixels corresponding to the right eye 40%. This pixel-based duty-cycle weighting may be repeated for each color in the RGB color space. Note that the duty-cycle weighting may be determined by the position of which ever eye (left of right) that is closest to the optical mapping of a display lens (such as lens 1710) and the current rendered image. In some embodiments, a left or right projection matrix is used to define how the rays from the current rendered image relate to a tracked left or right eye. Thus, based at least in part on the position of the left and right eyes relative to lenticular array display 1700, the computer system may give more duty-cycle weighting to the left eye or the right eye." ¶0202, "The inputs in the DSL technique may be a stereo image pair (left and right images), the display and lens parameters, and the 3D head or eye positions of the user or viewer." ¶0203, "the 3D eye positions (e.sub.p), which may be obtained by a head or eye tracker (e.g., in terms of the camera coordinates), may be transformed to display coordinates.").

As to claim 16, claim 15 is incorporated and the combination of Lange and Aguirre-Valencia discloses the fixed mapping information includes a lenticular to UV map that maps subpixels in the current texture information to subpixels of the lenticular display (See claim 7 for detailed analysis.).

As to claim 17, claim 16 is incorporated and the combination of Lange and Aguirre-Valencia discloses detecting one or more persons in front of the lenticular display; and generating the lenticular image by sampling only subpixels from the current texture information that correspond to viewing angles of the detected one or more persons (See claim 8 for detailed analysis.).

Claims 9-11 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lange (US Pub 2012/0182403 A1) in view of Van Berkel et al. (US Patent 6,118,584) and Pandey et al. (US Pub 2022/0014723 A1), further in view of Mccombe et al. (US Pub 2025/0036194 A1).

As to claim 9, claim 1 is incorporated and Lange does not disclose the device is a head-mounted device (HMD) of a computer-generated reality (CGR) system, and wherein the object is at least a portion of a face of a user of the HMD. Mccombe teaches the device is a head-mounted device (HMD) of a computer-generated reality (CGR) system, and wherein the object is at least a portion of a face of a user of the HMD (Mccombe, ¶0098, "displaying images to a user utilizing a binocular stereo head-mounted display (HMD)." ¶0240, "binocular stereo displays (such as the commercially available Oculus Rift) can be employed used, or still further, a lenticular type display can be employed, to allow auto-stereoscopic viewing." ¶0079, "estimating a location of the first user's head or eyes, thereby generating tracking information; wherein the reconstructing of a synthetic view of the second user comprises reconstructing the synthetic view based on the generated data representation and the generated tracking information; and wherein 3D image reconstruction is executed by warping a 2D image by utilizing the control points, by sliding a given pixel along a head movement vector at a displacement rate proportional to disparity, based on the tracking information and disparity values."). Lange and Mccombe are considered to be analogous art because both pertain to image display. It would have been obvious before the effective filing date of the claimed invention to have modified Lange with the features of "the device is a head-mounted device (HMD) of a computer-generated reality (CGR) system, and wherein the object is at least a portion of a face of a user of the HMD" as taught by Mccombe. The claim would have been obvious because the technique for improving a particular class of devices was part of the ordinary capabilities of a person of ordinary skill in the art, in view of the teaching of the technique for improvement in other situations.
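Looking back at claims 7-8 and 16-17 above, the "sampling only subpixels ... that correspond to viewing angles of the detected one or more persons" limitation can be pictured as masking the fixed map by view index. This is a speculative sketch: the person detector and the binning of detected positions into view indices are assumed, and lut/view_map reuse the invented layout from the earlier sketches.

```python
import numpy as np

def render_for_viewers(texture, lut, view_map, viewer_views):
    """Sample only subpixels whose view index serves a detected person.

    view_map:     (H, W) view index per display pixel (as in the off-line map).
    viewer_views: set of view indices covering the detected viewers' angles,
                  assumed to come from an upstream detector (not shown).
    Pixels for unwatched views are left black, as a stand-in for skipping
    their sampling entirely.
    """
    th, tw, _ = texture.shape
    u = np.clip((lut[..., 0] * tw).astype(int), 0, tw - 1)
    v = np.clip((lut[..., 1] * th).astype(int), 0, th - 1)
    frame = np.zeros(lut.shape[:2] + (3,), dtype=texture.dtype)
    mask = np.isin(view_map, list(viewer_views))
    frame[mask] = texture[v[mask], u[mask]]
    return frame

# Example: detected viewers cover views {0, 2} of a 4-view panel.
texture = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)
lut = np.random.rand(2, 4, 2).astype(np.float32)
view_map = np.tile(np.arange(4), (2, 1))  # rows of 0,1,2,3
frame = render_for_viewers(texture, lut, view_map, {0, 2})
```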
As to claim 10, claim 9 is incorporated and the combination of Lange and Mccombe discloses the controller is further configured to: detect movement of the HMD on the user's head; and in response to the movement, regenerate fixed mapping information for the user's face from texture and mesh information for the user's face obtained from the one or more sensors (Mccombe, ¶0079, "estimating a location of the first user's head or eyes, thereby generating tracking information; wherein the reconstructing of a synthetic view of the second user comprises reconstructing the synthetic view based on the generated data representation and the generated tracking information; and wherein 3D image reconstruction is executed by warping a 2D image by utilizing the control points, by sliding a given pixel along a head movement vector at a displacement rate proportional to disparity, based on the tracking information and disparity values." ¶0084, "A related practice of the invention further includes rotating the source image and control point coordinates so as to align the view vector to image scanlines; iterating through each scanline and each control point for a given scanline, generating a line element beginning and ending at each control point in 2D image space, with the addition of the corresponding disparity value multiplied by the corresponding view vector magnitude with the corresponding x-axis coordinate; assigning a texture coordinate to the beginning and ending points of each generated line element, equal to their respective, original 2D location in the source image; and interpolating texture coordinates linearly along each line element; thereby to create a resulting image in which image data between the control points is linearly stretched." ¶0393, "Using conventional head tracking methods, a system in accordance with the invention can establish an estimate of the viewer's head or eye location and/or orientation. With this information and the disparity values acquired from feature correspondence or within the transmitted control point stream, the system can slide the pixels along the head movement vector at a rate that is proportional to the disparity. As such, the disparity forms the radius of a "sphere" of motion for a given feature." ¶0411, "Other practices of the invention can include a 2D crop based on head location (see the discussion above relating to head tracking), and rectification transforms for texture coordinates. Those skilled in the art will understand that the invention can be practiced in connection with conventional 2D displays, or various forms of head-mounted stereo displays (HMDs), which may include binocular headsets or lenticular displays." ¶0569-0575.).

As to claim 11, claim 9 is incorporated and the combination of Lange and Mccombe discloses the controller is further configured to: detect movement of the HMD on the user's head; and move a window within the fixed mapping information based on the detected movement (Mccombe, ¶0079, "estimating a location of the first user's head or eyes, thereby generating tracking information; wherein the reconstructing of a synthetic view of the second user comprises reconstructing the synthetic view based on the generated data representation and the generated tracking information; and wherein 3D image reconstruction is executed by warping a 2D image by utilizing the control points, by sliding a given pixel along a head movement vector at a displacement rate proportional to disparity, based on the tracking information and disparity values." ¶0084, "A related practice of the invention further includes rotating the source image and control point coordinates so as to align the view vector to image scanlines; iterating through each scanline and each control point for a given scanline, generating a line element beginning and ending at each control point in 2D image space, with the addition of the corresponding disparity value multiplied by the corresponding view vector magnitude with the corresponding x-axis coordinate; assigning a texture coordinate to the beginning and ending points of each generated line element, equal to their respective, original 2D location in the source image; and interpolating texture coordinates linearly along each line element; thereby to create a resulting image in which image data between the control points is linearly stretched." ¶0393, "Using conventional head tracking methods, a system in accordance with the invention can establish an estimate of the viewer's head or eye location and/or orientation. With this information and the disparity values acquired from feature correspondence or within the transmitted control point stream, the system can slide the pixels along the head movement vector at a rate that is proportional to the disparity. As such, the disparity forms the radius of a "sphere" of motion for a given feature." ¶0411, "Other practices of the invention can include a 2D crop based on head location (see the discussion above relating to head tracking), and rectification transforms for texture coordinates. Those skilled in the art will understand that the invention can be practiced in connection with conventional 2D displays, or various forms of head-mounted stereo displays (HMDs), which may include binocular headsets or lenticular displays." ¶0569, 671: Reconstruct synthetic view based on data representation and tracking information; execute 3D image reconstruction by warping 2D image, using control points: sliding given pixel along a head movement vector at a displacement rate proportional to disparity, based on tracking information and disparity values; ¶0570, 672: (wherein disparity values are acquired from feature correspondence function or control point data stream); ¶0571, 673: (Use tracking information to control 2D crop box: synthetic view is reconstructed based on view origin, and then cropped and scaled to fill user's display screen view window; define minima and maxima of crop box as function of user's head location with respect to display screen and dimensions of display screen view window).).
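Claim 11's "move a window within the fixed mapping information" can be read as cropping a display-sized sub-rectangle out of an oversized precomputed map instead of regenerating the map (contrast claim 10, which regenerates it). A speculative sketch only; the millimeters-to-pixels scale and the slippage estimate are invented placeholders, not anything recited in the claims or references:

```python
import numpy as np

def shift_mapping_window(full_lut, base_origin, movement_mm, window_hw,
                         px_per_mm=2.0):
    """Slide a display-sized window within an oversized fixed mapping.

    full_lut:    (FH, FW, 2) precomputed map, larger than the display.
    base_origin: (y, x) of the window before the movement.
    movement_mm: (dy, dx) HMD slippage estimated by the device's sensors.
    window_hw:   (H, W) size of the display window to extract.
    px_per_mm:   assumed calibration scale from slippage to map pixels.
    Returns the new (H, W, 2) window, clamped to the map bounds.
    """
    h, w = window_hw
    fh, fw, _ = full_lut.shape
    y = int(np.clip(base_origin[0] + movement_mm[0] * px_per_mm, 0, fh - h))
    x = int(np.clip(base_origin[1] + movement_mm[1] * px_per_mm, 0, fw - w))
    return full_lut[y:y + h, x:x + w]

# Example: a 2x4 display window sliding inside a 10x12 precomputed map.
full_lut = np.random.rand(10, 12, 2).astype(np.float32)
window = shift_mapping_window(full_lut, base_origin=(4, 4),
                              movement_mm=(1.0, -0.5), window_hw=(2, 4))
assert window.shape == (2, 4, 2)
```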
As to claim 18, claim 12 is incorporated and the combination of Lange and Mccombe discloses the device is a head-mounted device (HMD) of a computer-generated reality (CGR) system, and wherein the object is at least a portion of a face of a user of the HMD (See claim 9 for detailed analysis.).

As to claim 19, claim 18 is incorporated and the combination of Lange and Mccombe discloses detecting movement of the HMD on a user's head; and in response to the movement, regenerating the fixed mapping information for the user's face from texture and mesh information for the user's face obtained from the one or more sensors (See claim 10 for detailed analysis.).

As to claim 20, claim 18 is incorporated and the combination of Lange and Mccombe discloses detecting movement of the HMD on a user's head; and moving a window within the fixed mapping information based on the detected movement (See claim 11 for detailed analysis.).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YU CHEN, whose telephone number is (571) 270-7951. The examiner can normally be reached M-F 8-5 PST, mid-day flex.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YU CHEN/
Primary Examiner, Art Unit 2613

Prosecution Timeline

Dec 16, 2022: Application Filed
Jun 02, 2025: Non-Final Rejection (§103)
Sep 04, 2025: Response Filed
Sep 27, 2025: Final Rejection (§103)
Nov 21, 2025: Response after Non-Final Action
Dec 12, 2025: Request for Continued Examination
Jan 13, 2026: Response after Non-Final Action
Mar 10, 2026: Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604497: THIN FILM TRANSISTOR AND ARRAY SUBSTRATE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597176: IMAGE GENERATOR AND METHOD OF IMAGE GENERATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12589481: TOOL ATTRIBUTE MANAGEMENT IN AUTOMATED TOOL CONTROL SYSTEMS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12588347: DISPLAY DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586265: LINE DRAWING METHOD, LINE DRAWING APPARATUS, ELECTRONIC DEVICE, AND COMPUTER READABLE STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 98% (+29.9%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 1052 resolved cases by this examiner. Grant probability derived from career allow rate.
