Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/28/26 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 9, 12, 16, 22, 23, 25, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2011/0254840 A1 (hereinafter Halstead) in view of “Gradient-Domain Processing within a Texture Atlas” by Fabian Prada, et al. (hereinafter Prada) in view of “Mesh color sharpening” by Zinat Afrose, et al. (hereinafter Afrose) in view of U.S. Patent Application Publication 2008/0056573 A1 (hereinafter Matsuda) in view of U.S. Patent Application Publication 2019/0261024 A1 (hereinafter Livshitz) in view of “An adaptive split-and-merge method for binary image contour data compression” by Yi Xiao, et al. (hereinafter Xiao).
Regarding claim 1, the limitations “A three-dimensional (3D) image modeling system configured to automatically generate photorealistic, virtual 3D package and product models from two-dimensional (2D) imaging assets and dimensional data, the 3D image modeling system comprising: one or more processors; an imaging asset manipulation script comprises instructions configured to execute on the one or more processors; and a memory configured to store 2D imaging assets and dimensional datasets accessible by the one or more processors and the computing instructions of the imaging asset manipulation script” are taught by Halstead (Halstead, e.g. abstract, paragraphs 10-15, 46-95, describes a system for automatically generating virtual 3D product models from 2D images, dimensional data, and shape class information for a set of products, where the system includes processors executing stored programs, i.e. imaging asset manipulation script(s), e.g. paragraphs 47, 90-94, and the 2D image assets and dimensional data sets are stored in a database in the system, e.g. paragraphs 48, 49, table 1.)
The limitations “wherein the computing instructions of the imaging asset manipulation script, when executed by the one or more processors, cause the one or more processors to: obtain a shape classification defining a real-world product or product package to be virtually modeled in 3D space, obtain a dimensional dataset defining product or package measurements of the real-world product or product package to be virtually modeled in 3D space, obtain a 2D image asset, where the 2D image asset is selected from the 2D imaging assets, and wherein the 2D image asset depicts the real-world product or product package, extract an alpha channel from the 2D image asset, generate, with the alpha channel, a first spline comprising a first plurality of points positioned along a perimeter of a first portion of a shape silhouette of the real-world product or product package depicted in the 2D image asset, wherein the first spline is a curve of a shape silhouette of the alpha channel extracted from the 2D image asset” are taught by Halstead (Halstead, e.g. paragraphs 48-67, table 1, teaches that for each product in the database, there is a defined shape class, dimensional data indicating real-world measurements, and 2D front and back images having an alpha layer defining the foreground/background separation in the images. Further, Halstead, e.g. paragraphs 68-73, teaches that a spline is defined in step 114 based on the perimeter/silhouette defined by the points in the mask file, e.g. table 2, which are determined from the image alpha layer in step 112, i.e. the claimed spline comprising points along the silhouette of the product in the alpha channel of the image. More specifically, Halstead, paragraphs 70-71, table 2, indicates that the image mask file indicates the pixel coordinates of the outline boundary which are determined from the pixels in the alpha layer of the image, i.e. the perimeter of the object in the alpha channel, and further, paragraph 73, that step 114 fits line segments in pixels to the curve defined in the image mask file to define the outline curve, i.e. the claimed spline.)
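As an illustrative aid only, and not as part of the grounds of rejection or an assertion of Halstead’s actual implementation, the general mechanism described above (extract the alpha channel, trace the silhouette boundary, and fit a reduced set of line segments to it) might be sketched as follows; the file name, the use of OpenCV, and the pixel tolerance are assumptions.

```python
# Illustrative sketch only (not Halstead's implementation): trace the product
# silhouette from the alpha channel of an RGBA image and fit a reduced set of
# line segments (perimeter points) to it.  File name and tolerance are assumed.
import cv2
import numpy as np

def silhouette_spline(image_path: str, tolerance_px: float = 2.0) -> np.ndarray:
    rgba = cv2.imread(image_path, cv2.IMREAD_UNCHANGED)       # assumes an H x W x 4 image
    alpha = rgba[:, :, 3]                                      # the alpha channel
    _, mask = cv2.threshold(alpha, 0, 255, cv2.THRESH_BINARY)  # foreground/background mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    outline = max(contours, key=cv2.contourArea)               # outer boundary pixel coordinates
    # Fit line segments within a pixel tolerance, reducing the number of points,
    # loosely analogous to iteratively reducing line segments toward a target goal.
    spline_pts = cv2.approxPolyDP(outline, tolerance_px, True)
    return spline_pts.reshape(-1, 2)                           # N x 2 points along the perimeter

# Hypothetical usage: points = silhouette_spline("front_image.png")
```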
The limitations (addressed out of order) “generate a first parametric model of the first portion of the real-world product or product package based on the first spline, the dimensional dataset, and the shape classification … generate a virtual 3D model of the real-world product or product package based on the first parametric model … and one or more attributes corresponding to the real-world product or product package; and render, via a graphical display or environment, the virtual 3D model representing the real-world product or product package in a virtual 3D space” are taught by Halstead (Halstead, e.g. paragraphs 50-68, 74-87, teaches that each shape class is associated with a template 3D mesh model, i.e. the claimed parametric model, which is combined with the claimed dimensional dataset and determined silhouette spline to generate a virtual 3D model having the real-world dimensions of the product and the corresponding 2D image portions textured thereon, with photorealism dependent upon the input and output parameters and input images provided for the product, e.g. paragraphs 80, 87. Finally, Halstead, e.g. paragraph 88, teaches that one use of the generated product models is within a virtual 3D environment, i.e. the claimed rendering of the generated model in a virtual 3D space.)
The limitations “obtain a 2D image asset, where the 2D image asset is selected from the 2D imaging assets, and wherein the 2D image asset depicts the real-world product or product package, generate a first spline comprising a first plurality of points positioned along a perimeter of a first portion of a shape silhouette of the real-world product or product package depicted in the 2D image asset, wherein the shape silhouette is a shape silhouette of the alpha channel extracted from the 2D image asset, generate a second spline comprising a second plurality of points positioned along the perimeter of a second portion of the shape silhouette, generate a first parametric model of the first portion of the real-world product or product package based on the first spline, the dimensional dataset, and the shape classification, generate a second parametric model of [the] second portion of the real-world product or package based on the second spline, the dimensional dataset, and the shape classification, generate a virtual 3D model of the real-world product or product package based on the first parametric model and the second parametric model and one or more attributes corresponding to the real-world product or product package; and render, via a graphical display or environment, the virtual 3D model representing the real-world product or product package in a virtual 3D space” are taught by Halstead (Halstead, e.g. paragraphs 73, 77, 84, teaches that at least some shape classes may have two perimeter splines generated from the image, where the second spline is similarly extruded into a second parametric mesh used to generate the resulting virtual 3D model of the product in the image(s). Specifically, as discussed in paragraph 84 for the BATTERY shape class, in addition to the first perimeter common to all the shape classes discussed above, step 114 creates a clamshell extrusion shape based on the shape of the blister pack content, which is used to generate the corresponding parametric mesh in step 116 for use in generating the virtual 3D model output at step 118, e.g. paragraph 85.)
The limitations (addressed out of order) “determine a current pixel value associated with [a textured live package geometry associated with the virtual 3D model], and adjust the pixel value to make the … textured live package geometry have increased contrast” are not explicitly taught by Halstead (As noted above, Halstead, e.g. paragraphs 85-88, teaches that the resulting 3D model has planar textures from the input photos, where the 3D model is rendered and displayed, i.e. the 3D model is associated with package textures applied to the geometry of the model, corresponding to the claimed textured live package geometry associated with the virtual 3D model. While Halstead, e.g. paragraphs 85, 87, teaches that the texture map data may be stretched or adjusted, with the goal being realistic results, Halstead does not explicitly teach adjusting the textures by adjusting the pixel values of the textures to be closer to a desired pixel value to increase the contrast of the adjusted portions of the texture.) However, this limitation is taught by Prada in view of Afrose (Prada, e.g. abstract, sections 1, 3-8, discloses a system for gradient-domain processing of a texture atlas of a 3D model allowing for a variety of purposes as described in section 7. Prada, section 7.1, figures 14 and 15, teaches that one purpose is allowing a user to specify regions for applying sharpening or smoothing, where the sharpening filter applied to the color texture pixels amplifies the color variation of the pixels, i.e. as shown in figure 14 right, figure 15(c), the contrast is increased by lightening lighter colored pixels and darkening darker colored pixels, e.g. the dancer’s eyebrows are darker and eyelids are lighter, increasing the contrast between them, and analogously the skin under the eyelid of the face in figure 15(c) has increased contrast in comparison to 15(a). Prada, section 7.1, paragraph 4, teaches that the adjustment can be applied using an interactive system allowing a user to specify which texture regions should be sharpened and which should be smoothed. Further, while Prada does not address sharpening the texture of a 3D model representing a real world product, per se, it is noted that one of ordinary skill in the art would have understood that sharpening the texture of a 3D model representing a real world product would potentially improve the quality of the resulting 3D model, e.g. Afrose, abstract, sections 1-5, discloses a system for sharpening the colors of a 3D mesh model, where sharpening improves image quality, e.g. section 1, paragraph 4, including for 3D models representing real world products, e.g. section 5, paragraph 1, figure 7(g) being a sharpened improvement over the figure 7(a) original model.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Halstead’s virtual 3D product model generation system to include Prada’s texture atlas local filtering technique and interface in order to allow a user of Halstead’s system to interactively adjust a 3D product model's texture by selectively sharpening and/or smoothing regions of the texture, in order to increase or decrease the contrast in the region as desired, i.e. as in Prada’s examples of figures 14 and 15, adjusting the contrast in a region can improve the appearance quality of the model. In the modified system, Halstead’s texture adjustments noted in paragraph 85 would include allowing a user to use Prada’s texture atlas local filtering interface to specify regions for increasing/decreasing contrast by sharpening or smoothing.
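As an illustrative aid only, the contrast-increasing effect discussed above (pixels brighter than their surroundings pushed brighter, darker pixels pushed darker) can be sketched with a simple unsharp-mask filter applied to a selected texture region; this is not Prada’s gradient-domain solver, and the parameters are assumed.

```python
# Illustrative sketch only (not Prada's gradient-domain processing): sharpen a
# selected texture region by amplifying deviation from a local mean, which
# increases local contrast; a negative amount would smooth instead.
import cv2
import numpy as np

def sharpen_region(texture: np.ndarray, region_mask: np.ndarray,
                   amount: float = 1.0, radius: int = 5) -> np.ndarray:
    tex = texture.astype(np.float32)
    blurred = cv2.GaussianBlur(tex, (2 * radius + 1, 2 * radius + 1), 0)
    sharpened = tex + amount * (tex - blurred)                   # amplify local color variation
    out = np.where(region_mask[..., None] > 0, sharpened, tex)   # only the selected region
    return np.clip(out, 0, 255).astype(np.uint8)
```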
The limitations “identify text of a textured live package geometry associated with the virtual 3D model, determine a current pixel value associated with the text, and adjust the pixel value to make the text of the textured live package geometry have increased contrast” are implicitly taught by Halstead in view of Prada and Afrose (As discussed above, in the modified system, Halstead’s texture adjustments noted in paragraph 85 would include allowing a user to use Prada’s texture atlas local filtering interface to specify regions for increasing/decreasing contrast by sharpening or smoothing, where Prada, section 7.1, paragraph 4, teaches that the adjustment can be applied using an interactive system allowing a user to specify which texture regions should be sharpened and which should be smoothed. Further, Afrose, e.g. section 5, paragraph 1, figure 7(a),(g) shows that sharpening regions of a 3D product model texture comprising text improves the appearance quality of the 3D model. That is, the modified system allows a user to identify regions of the texture to have the contrast increased/decreased, and said user could choose to select regions of the texture comprising text for increasing the contrast thereof, corresponding to the claimed identification and adjustment of text of the textured live package geometry. In the interest of compact prosecution, Matsuda is cited for explicitly teaching automatic identification of regions for sharpening/smoothing by identifying regions of an image comprising text and regions which do not comprise text.) However, this limitation is explicitly taught by Matsuda (Matsuda, e.g. abstract, paragraphs 2, 32-87, describes a system for detecting text and pictorial regions of an image, e.g. paragraph 32, by analyzing the image content, e.g. paragraphs 34-40. Matsuda, e.g. paragraph 2, teaches that sharpening filters should be applied to text regions, whereas smoothing operations should be applied to pictorial regions, i.e. the detection of which regions comprise text and which regions comprise picture content allows selectively applying sharpening to text and smoothing to pictures.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Halstead’s virtual 3D product model generation system, including Prada’s texture atlas local filtering technique and interface, to include Matsuda’s image region detection technique for automatically identifying regions comprising text for sharpening filtering and regions comprising pictures for smoothing filtering as an alternative to Prada’s interactive region specification because Matsuda teaches that text regions should be sharpened and pictorial regions should be smoothed, e.g. paragraph 2, and one of ordinary skill in the art would recognize that the 3D model appearance quality can be improved by sharpening texture regions comprising text, as shown by Afrose, figure 7. In Halstead’s modified system, as an alternative to a user using Prada’s texture atlas local filtering interface to specify regions for increasing/decreasing contrast by sharpening or smoothing, Matsuda’s image region detection technique could be used to automatically determine which regions have text and should be sharpened to increase contrast, and which regions have pictures and should be smoothed to reduce contrast, i.e. the claimed identifying of text of a texture of the virtual 3D model, and adjusting the pixel values of the text to increase the contrast by brightening/darkening the pixel values.
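As an illustrative aid only, the automatic alternative discussed above (detect text-like regions, sharpen them, and smooth pictorial regions) might be sketched as follows; the edge-density heuristic, thresholds, and kernel sizes are assumptions standing in for Matsuda’s region detection, and sharpen_region() refers to the sketch above.

```python
# Illustrative sketch only: a crude edge-density stand-in for text/pictorial
# region detection, used to sharpen likely-text regions and smooth the rest.
import cv2
import numpy as np

def auto_text_contrast(texture: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(texture, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                            # strong edges, typical of text
    density = cv2.boxFilter(edges.astype(np.float32), -1, (15, 15))
    text_mask = (density > 40).astype(np.uint8)                  # dense-edge areas treated as text
    sharpened = sharpen_region(texture, text_mask, amount=1.0)   # text regions: increase contrast
    smoothed = cv2.GaussianBlur(texture, (5, 5), 0)              # pictorial regions: smooth
    return np.where(text_mask[..., None] > 0, sharpened, smoothed)
```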
The limitations “adjust the first spline to optimize a mapping of the first spline to the shape silhouette or to reduce the data size of the first spline by reducing or adjusting one or more of the first plurality of points positioned along the perimeter of the shape silhouette based on a rate of angle change of the first spline; wherein a number of the first plurality of points is increased at a portion of the first spline with a high rate of angle change” are partially taught by Halstead (Halstead, paragraphs 70-71, table 2, indicates that the image mask file indicates the pixel coordinates of the outline boundary of the alpha layer of the image, i.e. the perimeter of the object silhouette in the alpha channel, and further, paragraph 73, that step 114 fits line segments in pixels to the curve defined in the image mask file, including employing “an iterative process to optimize fit and reduce the number of line segments to a target goal”. That is, the iterative process adjusts the spline to achieve both claimed goals, optimizing the fit of the spline to the boundary defined in the alpha channel, and reducing the number of line segments making up the spline, and by extension the number of points and data size making up the spline. While Halstead teaches that the spline(s) may be iteratively refined in step 114, Halstead does not address adjusting the spline based on a rate of angle change of the spline by increasing the number of points at a portion with a high rate of angle change.) However, this limitation is taught by Livshitz (Livshitz, e.g. abstract, paragraphs 2-42, describes a technique for optimization of vector representation of a contour detected in a raster image, which detects portions of the contour having angles or curvatures greater than a threshold, e.g. paragraphs 9, 13-18, 40, 41, and optimizes the contour approximation by adding additional segmentation points to the contour when sharp angles or high curvature portions are detected, e.g. paragraphs 19, 26, 40-42, until the contour sufficiently approximates the original contour in the image. Further, Livshitz, e.g. paragraph 9, indicates that curvature is measured using a rate of angle change, i.e. high curvature is indicated by a change of direction along a length of the spline, with the example high curvature threshold being an angle between tangent vectors, which represent the angle at a point along the spline, that are 20 pixels apart being greater than 90 degrees, i.e. the rate of change of angle within the region exceeding 90 degrees per 20 pixels is indicative of high curvature.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Halstead’s virtual 3D product model generation system, including Prada’s texture atlas local filtering technique and interface, including Matsuda’s image region detection technique, to use Livshitz’ contour optimization technique for optimizing the outline curve from the image mask file in Halstead’s step 114 because Halstead does not describe details of optimizing the outline curve fit to the image mask file, and Livshitz describes details of a contour optimization technique for the same purpose of optimizing fit to an original image contour. In the modified system, Halstead’s step 114 would perform Livshitz’ technique, which, as noted above, adds additional segmentation points s, increasing the density of the contour, in areas exceeding angle or curvature thresholds, where the curvature threshold is measuring a rate of angle change as in Livshitz, paragraph 9.
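As an illustrative aid only, the rate-of-angle-change test and point-density increase discussed above might be sketched as follows; this is not Livshitz’s implementation, the turn angle at each vertex is used as a stand-in for the tangent-vector comparison over a fixed arc length, and in practice the inserted points would be sampled from the original raster contour rather than computed as midpoints.

```python
# Illustrative sketch only: insert extra points where the direction of a
# polyline changes by more than a threshold, increasing point density in
# portions with a high rate of angle change.
import numpy as np

def densify_high_curvature(points: np.ndarray, angle_deg: float = 90.0) -> np.ndarray:
    out = [points[0]]
    for i in range(1, len(points) - 1):
        v_in = points[i] - points[i - 1]                      # incoming segment direction
        v_out = points[i + 1] - points[i]                     # outgoing segment direction
        cosang = np.dot(v_in, v_out) / (np.linalg.norm(v_in) * np.linalg.norm(v_out) + 1e-9)
        turn = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if turn > angle_deg:                                  # high rate of angle change
            out.append((points[i - 1] + points[i]) / 2.0)     # extra point before the corner
            out.append(points[i])
            out.append((points[i] + points[i + 1]) / 2.0)     # extra point after the corner
        else:
            out.append(points[i])
    out.append(points[-1])
    return np.asarray(out)
```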
The limitation “adjust the first spline to optimize a mapping of the first spline to the shape silhouette or to reduce a data size of the first spline by reducing or adjusting one or more of the first plurality of points positioned along the perimeter of the shape silhouette based on a rate of change of angle of the first spline … a number of the first plurality of points is reduced at a portion of the first spline with a comparatively lower rate of change” is implicitly taught by Halstead in view of Livshitz (As noted above, Halstead, paragraphs 70-71, table 2, describes an iterative process that adjusts the spline to achieve both claimed goals, optimizing the fit of the spline to the boundary defined in the alpha channel, and reducing the number of line segments making up the spline, and by extension the number of points and data size making up the spline. Also noted in the modification above, in the modified system, Halstead’s step 114 would perform Livshitz’ technique, which adds additional segmentation points s, increasing the density of the contour, in areas exceeding angle or curvature thresholds, where the curvature threshold is measuring a rate of angle change as in Livshitz, paragraph 9. That is, in the modified system, Halstead’s step 114 would still include “reduc[ing] the number of line segments to a target goal”, i.e. while Livshitz’ technique optimizes the fit in areas of high curvature or sharp angles, in areas which are not high curvature or sharp angles, Halstead’s function of reducing the number of line segments would be used to achieve the claimed reductions of data size of the spline, i.e. one of ordinary skill in the art would have found it implicit that in Halstead’s modified system the reduction of line segments/points would be performed in the remaining portions having a comparatively lower rate of change, i.e. the portions wherein Livshitz’ technique does not increase the density. In the interest of compact prosecution, because Halstead does not explicitly state how the number of line segments are reduced, Xiao is cited for explicitly teaching that a spline approximation of a raster image contour can be modified into a more compact representation by merging line segments which are sufficiently collinear into a single line segment, i.e. the claimed adjusting a spline to reduce a data size of the spline by reducing the number of points positioned along the perimeter of the shape silhouette at a portion of the spline with a low rate of angle change.) However this limitation is explicitly taught by Xiao (Xiao, e.g. abstract, sections 1-5, describes an adaptive split-and-merge method for compressing image contours, e.g. figures 7, 8, showing examples of compressed contours. Xiao, e.g. section 2.1, describes the initial contour being determined from neighboring pixels analogous to Halstead’s product image mask file contents, table 2. Xiao, e.g. section 3, figure 6, describes using a collinearity test to determine whether the line segments between exemplary end points E and F can be merged into a single line segment EF, resulting in a reduced number of points used to represent a sufficiently collinear portion of the contour.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Halstead’s virtual 3D product model generation system, including Prada’s texture atlas local filtering technique and interface, including Matsuda’s image region detection technique, using Livshitz’ contour optimization technique, to use Xiao’s line segment merging technique in order to perform the line segment reduction portion of Halstead’s disclosed iterative spline adjustment process because Halstead does not explicitly state how the number of line segments are reduced by the process and Xiao describes an analogous image contour compression system with details of how the number of line segments may be reduced, i.e. by performing the collinearity test to determine when line segments between points E and F may be replaced by the line segment EF. In the modified system, Halstead’s step 114 would perform Livshitz’ technique, which adds additional segmentation points s, increasing the density of the contour, in areas exceeding angle or curvature thresholds, where the curvature threshold is measuring a rate of angle change as in Livshitz, paragraph 9, and Halstead’s step 114 would also perform Xiao’s technique, merging sets of collinear line segments into a single line segment, thereby reducing the number of points in portions of the spline having a comparatively lower rate of change, i.e. a set of line segments passing Xiao’s collinearity test will not have a high rate of curvature/angle change as measured by Livshitz.
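As an illustrative aid only, the collinearity-based merging discussed above might be sketched as follows; this is not Xiao’s exact test, and the deviation tolerance is an assumption. Runs of points whose intermediate members stay within the tolerance of the segment joining the run’s endpoints are replaced by that single segment, reducing the number of points in low-curvature portions.

```python
# Illustrative sketch only: greedily merge runs of nearly collinear segments
# into a single segment, reducing point count where the curve is nearly straight.
import numpy as np

def merge_collinear(points: np.ndarray, tol_px: float = 1.0) -> np.ndarray:
    def deviation(p, a, b):
        ab = b - a                                             # candidate merged segment a-b
        t = np.clip(np.dot(p - a, ab) / (np.dot(ab, ab) + 1e-9), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))                # distance of p from segment a-b

    kept, i = [0], 0
    while i < len(points) - 1:
        j = i + 1
        # Extend the merged segment while every skipped point stays within tolerance.
        while j + 1 < len(points) and all(
                deviation(points[k], points[i], points[j + 1]) <= tol_px
                for k in range(i + 1, j + 1)):
            j += 1
        kept.append(j)
        i = j
    return points[kept]
```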
Regarding claim 2, the limitations “wherein the shape classification comprises at least one of: a bottle classification, a symmetrical pump classification, a tube classification, a symmetrical bottle classification, a tottle classification, an asymmetrical bottle classification, a box classification, a pouch classification, a bag classification, a handled bottle classification, or a blister pack classification” are taught by Halstead (Halstead, e.g. paragraphs 26-31, 51-66, 84, discloses bag, box, pouch, tube, a cylindrical bottle, a cylindrical bottle with pump, and a blister battery clamshell corresponding to the claimed bag, box, pouch, tube, bottle, symmetrical bottle/pump, and blister pack classifications. Halstead additionally discloses asymmetric shapes, e.g. paragraphs 52, 53, and teaches that any other 3D shape class can be added as a template, such that, while not required by the claim, the additional tottle, asymmetrical bottle, and handled bottle classifications would be obvious additional shape class templates, i.e. although evidence is not cited at this time, said shape classes were common before the effective filing date of the claimed invention.)
Regarding claim 9, the limitation “update the first spline, the second spline, or the parametric model by applying one or more refinements” is taught by Halstead (Halstead, e.g. paragraph 73, teaches that step 114 fits line segments in pixels to the curve defined in the image mask file, including employing “an iterative process to optimize fit and reduce the number of line segments to a target goal”, i.e. an initial spline is iteratively refined, and as discussed in the claim 1 rejection above, step 114 may include generating the second spline for the battery clamshell model which would also be subject to the iterative process.)
Regarding claim 12, the limitation “wherein the virtual 3D model is a polygonal model representation of the real-world product or product package” is taught by Halstead (Halstead, e.g. paragraph 80, indicates that the system can be controlled to produce output model geometry ranging from very high to very low polygon count, depending on the intended use of the output model.)
Regarding claim 16, the limitation “further comprising a server comprising at least one processor of the one or more processors, wherein at least a portion of 2D imaging assets and dimensional datasets are retrieved via a computing network” is implicitly taught by Halstead (Halstead, e.g. paragraphs 48, 49, indicates that data store 11 includes the 2D assets and dimensional datasets “using file path name, pointers/links or the like to the library files”, where data store 11 may be accessed using a server acting as the computing system 100, e.g. paragraphs 88-93, figure 8A. Halstead indicates that the library files are accessed via paths/pointers/links, discloses that the server includes network access and may download software instructions through a network, and teaches that the library files may be pre-existing assets stored on other systems for other purposes, e.g. paragraph 12. While Halstead does not explicitly indicate that data store 11 used by the server 60 in the figure 8A embodiment includes accessing/downloading the library files through a network, one of ordinary skill in the art would have found this to be an implicit possibility, i.e. one of ordinary skill in the art would have understood that paths or links can be network or internet addresses, and understood that downloading software instructions for an application is often combined with downloading related data/library files, and, finally, understood the advantages of accessing/downloading the pre-existing library files through a network connection, e.g. avoiding the transport and costs of physical media data transfer.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Halstead’s virtual 3D product model generation system, including Prada’s texture atlas local filtering technique and interface, including Matsuda’s image region detection technique, using Livshitz’ contour optimization technique, using Xiao’s line segment merging technique, with Halstead’s server embodiment (figure 8A) using a data store 11 that accesses/downloads library files through a computing network because, as discussed above, one of ordinary skill in the art would have found this to be an implicit possibility, in view of understanding that paths or links can be network or internet addresses, that downloading software instructions for an application is often combined with downloading related data/library files, and the advantages of accessing/downloading the pre-existing library files through a network connection, e.g. avoiding the transport and costs of physical media data transfer.
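As an illustrative aid only, retrieving a pre-existing 2D imaging asset and its dimensional dataset through a network, as contemplated above, might be sketched as follows; the URLs and field names are hypothetical.

```python
# Illustrative sketch only: fetch an image asset and a dimensional dataset over
# a network when the data store holds URLs rather than local file paths.
import json
import urllib.request

def fetch_asset(image_url: str, dims_url: str):
    with urllib.request.urlopen(image_url) as resp:
        image_bytes = resp.read()                         # 2D imaging asset (e.g. PNG with alpha)
    with urllib.request.urlopen(dims_url) as resp:
        dims = json.loads(resp.read().decode("utf-8"))    # e.g. {"width_mm": ..., "depth_mm": ...}
    return image_bytes, dims
```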
Regarding claim 22, the limitations “A three-dimensional (3D) image modeling method for automatically generate photorealistic, virtual 3D package and product models from two-dimensional (2D) imaging assets and dimensional data, the 3D image modeling method comprising: obtaining, by one or more processors, a shape classification defining a real-world product or product package to be virtually modeled in 3D space, obtain, by the one or more processors, a dimensional dataset defining product or package measurements of the real-world product or product package to be virtually modeled in 3D space, wherein the dimensional dataset includes additional sub-datasets for various portions of the product or product package measurements, generating, by the one or more processors, a spline, based on the 2D image asset, where the 2D image asset is selected from the 2D imaging assets, and wherein the 2D image asset depicts the real-world product or product package, the spline comprising a plurality of points positioned along a perimeter of a shape silhouette of the real-world product or product package depicted in the 2D image asset” are taught by Halstead (Halstead, e.g. abstract, paragraphs 10-15, 46-95, describes a system for automatically generating virtual 3D product models from 2D images, dimensional data, and shape class information for a set of products, where the system includes processors executing stored programs, e.g. paragraphs 47, 90-94. Halstead, e.g. paragraphs 48-67, table 1, teaches that for each product in the database, there is a defined shape class, dimensional data indicating real-world measurements, and 2D front and back images having an alpha layer defining the foreground/background separation in the images. Further, Halstead, e.g. paragraphs 68-73, teaches that a spline is defined in step 114 based on the perimeter/silhouette defined by the points in the mask file, e.g. table 2, which are determined from the image alpha layer in step 112, i.e. the claimed spline comprising points along the silhouette of the product in the image. Finally, Halstead, e.g. paragraph 84, teaches that for some shape classes, such as the exemplary Battery shape class, steps 114 and 116 generate and process two components of the model, one for the cardboard insert and one for the clamshell, where step 114 uses the product dimensional data as part of sizing the 3D template model based on the spline, e.g. paragraphs 74, 75, wherein the clamshell component is further defined by cutout dimension data, e.g. paragraph 84, Table 5, i.e. the dimensional dataset includes additional sub-datasets for portions of the product/product package.)
The limitations (addressed out of order) “generating, by the one or more processors, a parametric model based on the spline, the dimensional dataset, and the shape classification, generating, by the one or more processors, a virtual 3D model of the real-world product or product package based on the parametric model and one or more attributes corresponding to the real-world product or product package; and rendering, by the one or more processors, via a graphical display or environment, the virtual 3D model representing the real-world product or product package in a virtual 3D space” are taught by Halstead (Halstead, e.g. paragraphs 50-68, 74-87, teaches that each shape class is associated with a template 3D mesh model, i.e. the claimed parametric model, which is combined with the claimed dimensional dataset and determined silhouette spline to generate a virtual 3D model having the real-world dimensions of the product and the corresponding 2D image portions textured thereon, with photorealism dependent upon the input and output parameters and input images provided for the product, e.g. paragraphs 80, 87. Finally, Halstead, e.g. paragraph 88, teaches that one use of the generated product models is within a virtual 3D environment, i.e. the claimed rendering of the generated model in a virtual 3D space.)
The limitations “identify, by the one or more processors, a portion of the spline comprising at least one characteristic exceeding a predefined threshold, the at least one characteristic being selected from the group consisting of a rate of angle change, a curvature, and an angle, adjusting, by the one or more processors, the portion of the spline to optimize a mapping of the portion of the spline to the shape silhouette by increasing a point density of the plurality of points positioned along the perimeter of the shape silhouette at the portion of the spline based on the at least one characteristic exceeding the predefined threshold … adjusting the spline to reduce a data size of the spline by reducing or adjusting one or more of the plurality of points positioned along the perimeter of the shape silhouette based on a rate of angle change of the spline; wherein a number of the plurality of points is increased at a portion of the spline with a high rate of angle change” are partially taught by Halstead (Halstead, paragraphs 70-71, table 2, indicates that the image mask file indicates the pixel coordinates of the outline boundary of the alpha layer of the image, i.e. the perimeter of the object silhouette in the alpha channel, and further, paragraph 73, that step 114 fits line segments in pixels to the curve defined in the image mask file, including employing “an iterative process to optimize fit and reduce the number of line segments to a target goal”. That is, the iterative process adjusts the spline to achieve both claimed goals, optimizing the fit of the spline to the boundary defined in the alpha channel, and reducing the number of line segments making up the spline, and by extension the number of points and data size making up the spline. While Halstead teaches that the spline(s) may be iteratively refined in step 114, Halstead does not address increasing the point density of the spline in response to identifying a portion of the spline exceeding one of the claimed thresholds, or based on a rate of change, per se.) However, this limitation is taught by Livshitz (Livshitz, e.g. abstract, paragraphs 2-42, describes a technique for optimization of vector representation of a contour detected in a raster image, which detects portions of the contour having angles or curvatures greater than a threshold, e.g. paragraphs 9, 13-18, 40, 41, and optimizes the contour approximation by adding additional segmentation points to the contour when sharp angles or high curvature portions are detected, e.g. paragraphs 19, 26, 40-42, until the contour sufficiently approximates the original contour in the image. That is, by adding additional segmentation points s to the contour, the point density is increased in areas of the contour, i.e. spline, exceeding the angle or curvature thresholds, as claimed. Further, Livshitz, e.g. paragraph 9, indicates that curvature is measured using a rate of angle change, i.e. high curvature is indicated by a change of direction along a length of the spline, with the example high curvature threshold being an angle between tangent vectors, which represent the angle at a point along the spline, that are 20 pixels apart being greater than 90 degrees, i.e. the rate of change of angle within the region exceeding 90 degrees per 20 pixels is indicative of high curvature.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Halstead’s virtual 3D product model generation system to use Livshitz’ contour optimization technique for optimizing the outline curve from the image mask file in Halstead’s step 114 because Halstead does not describe details of optimizing the outline curve fit to the image mask file, and Livshitz describes details of a contour optimization technique for the same purpose of optimizing fit to an original image contour. In the modified system, Halstead’s step 114 would perform Livshitz’ technique, which, as noted above, adds additional segmentation points s, increasing the density of the contour, in areas exceeding angle or curvature thresholds, where the curvature threshold is measuring a rate of angle change as in Livshitz, paragraph 9. Further, in the modified system, Halstead’s step 114 would still include “reduc[ing] the number of line segments to a target goal”, i.e. while Livshitz’ technique optimizes the fit in areas of high curvature or sharp angles, in areas which are not high curvature or sharp angles, Halstead’s function of reducing the number of line segments would be used to achieve the claimed reductions of data size of the spline.
The limitation “adjusting the spline to reduce a data size of the spline by reducing or adjusting one or more of the plurality of points positioned along the perimeter of the shape silhouette based on a rate of change of angle of the spline … a number of the plurality of points is reduced at a portion of the spline with a comparatively lower rate of change” is implicitly taught by Halstead in view of Livshitz (As noted above, Halstead, paragraphs 70-71, table 2, describes an iterative process that adjusts the spline to achieve both claimed goals, optimizing the fit of the spline to the boundary defined in the alpha channel, and reducing the number of line segments making up the spline, and by extension the number of points and data size making up the spline. Also noted in the modification above, in the modified system, Halstead’s step 114 would perform Livshitz’ technique, which adds additional segmentation points s, increasing the density of the contour, in areas exceeding angle or curvature thresholds, where the curvature threshold is measuring a rate of angle change as in Livshitz, paragraph 9. It was further noted, in the modified system, Halstead’s step 114 would still include “reduc[ing] the number of line segments to a target goal”, i.e. while Livshitz’ technique optimizes the fit in areas of high curvature or sharp angles, in areas which are not high curvature or sharp angles, Halstead’s function of reducing the number of line segments would be used to achieve the claimed reductions of data size of the spline, i.e. one of ordinary skill in the art would have found it implicit that in Halstead’s modified system the reduction of line segments/points would be performed in the remaining portions having a comparatively lower rate of change, i.e. the portions wherein Livshitz’ technique does not increase the density. In the interest of compact prosecution, because Halstead does not explicitly state how the number of line segments are reduced, Xiao is cited for explicitly teaching that a spline approximation of a raster image contour can be modified into a more compact representation by merging line segments which are sufficiently collinear into a single line segment, i.e. the claimed adjusting a spline to reduce a data size of the spline by reducing the number of points positioned along the perimeter of the shape silhouette at a portion of the spline with a low rate of angle change.) However this limitation is explicitly taught by Xiao (Xiao, e.g. abstract, sections 1-5, describes an adaptive split-and-merge method for compressing image contours, e.g. figures 7, 8, showing examples of compressed contours. Xiao, e.g. section 2.1, describes the initial contour being determined from neighboring pixels analogous to Halstead’s product image mask file contents, table 2. Xiao, e.g. section 3, figure 6, describes using a collinearity test to determine whether the line segments between exemplary end points E and F can be merged into a single line segment EF, resulting in a reduced number of points used to represent a sufficiently collinear portion of the contour.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Halstead’s virtual 3D product model generation system, using Livshitz’ contour optimization technique, to use Xiao’s line segment merging technique in order to perform the line segment reduction portion of Halstead’s disclosed iterative spline adjustment process because Halstead does not explicitly state how the number of line segments are reduced by the process and Xiao describes an analogous image contour compression system with details of how the number of line segments may be reduced, i.e. by performing the collinearity test to determine when line segments between points E and F may be replaced by the line segment EF. In the modified system, Halstead’s step 114 would perform Livshitz’ technique, which adds additional segmentation points s, increasing the density of the contour, in areas exceeding angle or curvature thresholds, where the curvature threshold is measuring a rate of angle change as in Livshitz, paragraph 9, and Halstead’s step 114 would also perform Xiao’s technique, merging sets of collinear line segments into a single line segment, thereby reducing the number of points in portions of the spline having a comparatively lower rate of change, i.e. a set of line segments passing Xiao’s collinearity test will not have a high rate of curvature/angle change as measured by Livshitz.
The limitations (addressed out of order) “determining, by the one or more processors, a current pixel value associated with [a textured live package geometry associated with the virtual 3D model], and adjusting, by the one or more processors, the pixel value to make the … textured live package geometry have increased contrast” are not explicitly taught by Halstead (As noted above, Halstead, e.g. paragraphs 85-88, teaches that the resulting 3D model has planar textures from the input photos, where the 3D model is rendered and displayed, i.e. the 3D model is associated with package textures applied to the geometry of the model, corresponding to the claimed textured live package geometry associated with the virtual 3D model. While Halstead, e.g. paragraphs 85, 87, teaches that the texture map data may be stretched or adjusted, with the goal being realistic results, Halstead does not explicitly teach adjusting the textures by adjusting the pixel values of the textures to be closer to a desired pixel value to increase the contrast of the adjusted portions of the texture.) However, this limitation is taught by Prada in view of Afrose (Prada, e.g. abstract, sections 1, 3-8, discloses a system for gradient-domain processing of a texture atlas of a 3D model allowing for a variety of purposes as described in section 7. Prada, section 7.1, figures 14 and 15, teaches that one purpose is allowing a user to specify regions for applying sharpening or smoothing, where the sharpening filter applied to the color texture pixels amplifies the color variation of the pixels, i.e. as shown in figure 14 right, figure 15(c), the contrast is increased by lightening lighter colored pixels and darkening darker colored pixels, e.g. the dancer’s eyebrows are darker and eyelids are lighter, increasing the contrast between them, and analogously the skin under the eyelid of the face in figure 15(c) has increased contrast in comparison to 15(a). Prada, section 7.1, paragraph 4, teaches that the adjustment can be applied using an interactive system allowing a user to specify which texture regions should be sharpened and which should be smoothed. Further, while Prada does not address sharpening the texture of a 3D model representing a real world product, per se, it is noted that one of ordinary skill in the art would have understood that sharpening the texture of a 3D model representing a real world product would potentially improve the quality of the resulting 3D model, e.g. Afrose, abstract, sections 1-5, discloses a system for sharpening the colors of a 3D mesh model, where sharpening improves image quality, e.g. section 1, paragraph 4, including for 3D models representing real world products, e.g. section 5, paragraph 1, figure 7(g) being a sharpened improvement over the figure 7(a) original model.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Halstead’s virtual 3D product model generation system, using Livshitz’ contour optimization technique, using Xiao’s line segment merging technique, to include Prada’s texture atlas local filtering technique and interface in order to allow a user of Halstead’s system to interactively adjust a 3D product model's texture by selectively sharpening and/or smoothing regions of the texture, in order to increase or decrease the contrast in the region as desired, i.e. as in Prada’s examples of figures 14 and 15, adjusting the contrast in a region can improve the appearance quality of the model. In the modified system, Halstead’s texture adjustments noted in paragraph 85 would include allowing a user to use Prada’s texture atlas local filtering interface to specify regions for increasing/decreasing contrast by sharpening or smoothing.
The limitations “identifying, by the one or more processors, text of a textured live package geometry associated with the virtual 3D model, determining, by the one or more processors, a current pixel value associated with the text, and adjusting, by the one or more processors, the pixel value to make the text of the textured live package geometry have increased contrast” are implicitly taught by Halstead in view of Prada and Afrose (As discussed above, in the modified system, Halstead’s texture adjustments noted in paragraph 85 would include allowing a user to use Prada’s texture atlas local filtering interface to specify regions for increasing/decreasing contrast by sharpening or smoothing, where Prada, section 7.1, paragraph 4, teaches that the adjustment can be applied using an interactive system allowing a user to specify which texture regions should be sharpened and which should be smoothed. Further, Afrose, e.g. section 5, paragraph 1, figure 7(a),(g) shows that sharpening regions of a 3D product model texture comprising text improves the appearance quality of the 3D model. That is, the modified system allows a user to identify regions of the texture to have the contrast increased/decreased, and said user could choose to select regions of the texture comprising text for increasing the contrast thereof, corresponding to the claimed identification and adjustment of text of the textured live package geometry. In the interest of compact prosecution, Matsuda is cited for explicitly teaching automatic identification of regions for sharpening/smoothing by identifying regions of an image comprising text and regions which do not comprise text.) However, this limitation is explicitly taught by Matsuda (Matsuda, e.g. abstract, paragraphs 2, 32-87, describes a system for detecting text and pictorial regions of an image, e.g. paragraph 32, by analyzing the image content, e.g. paragraphs 34-40. Matsuda, e.g. paragraph 2, teaches that sharpening filters should be applied to text regions, whereas smoothing operations should be applied to pictorial regions, i.e. the detection of which regions comprise text and which regions comprise picture content allows selectively applying sharpening to text and smoothing to pictures.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Halstead’s virtual 3D product model generation system, using Livshitz’ contour optimization technique, using Xiao’s line segment merging technique, including Prada’s texture atlas local filtering technique and interface, to include Matsuda’s image region detection technique for automatically identifying regions comprising text for sharpening filtering and regions comprising pictures for smoothing filtering as an alternative to Prada’s interactive region specification because Matsuda teaches that text regions should be sharpened and pictorial regions should be smoothed, e.g. paragraph 2, and one of ordinary skill in the art would recognize that the 3D model appearance quality can be improved by sharpening texture regions comprising text, as shown by Afrose, figure 7. In Halstead’s modified system, as an alternative to a user using Prada’s texture atlas local filtering interface to specify regions for increasing/decreasing contrast by sharpening or smoothing, Matsuda’s image region detection technique could be used to automatically determine which regions have text and should be sharpened to increase contrast, and which regions have pictures and should be smoothed to reduce contrast, i.e. the claimed identifying of text of a texture of the virtual 3D model, and adjusting the pixel values of the text to increase the contrast by brightening/darkening the pixel values.
Regarding claim 23, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 1 above. With respect to the limitations requiring that the first and second parametric models based on the splines include “live package geometry that is still editable”, it is noted that the broadest reasonable interpretation of this limitation merely requires that the data can be edited, i.e. merely by being stored in re-writable memory, e.g. system RAM, Halstead’s first (and, in the case of a shape class with two curves such as the battery shape class, second) outline curve(s) determined in step 114 are “still editable”, i.e. the data is stored in a re-writable memory allowing edits to be performed. Furthermore, Halstead’s first and second outline curve(s) determined in step 114, as discussed in the claim 1 rejection above, may be iteratively processed to optimize the fit and/or reduce the number of line segments, e.g. paragraph 73, corresponding to the claimed “live package geometry that is still editable”, i.e. an initial spline based on the alpha channel outline may be iteratively processed to a spline having an improved fit or reduced number of segments, corresponding to live geometry that is still editable.
Regarding claim 25, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 23 above. That is, as taught by Livshitz, paragraphs 9, 13-19, 26, 40-42, any portions of the contour which do not have angles or curvatures greater than the thresholds would not have additional segmentation points added, and by extension would remain at a lower relative density compared to the portions to which segmentation points are added, i.e. regions which were already smooth, such as the top portion of the contour in figures 1 and 2, would not have points of high curvature or sharp angles, and therefore would not have additional segmentation points added, in comparison to the lower portion of the contour of figures 1 and 2, having points of high curvature and sharp angles, resulting in additional segmentation points being added, i.e. the claimed greater point density.
Regarding claim 26, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 1 above.
Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2011/0254840 A1 (hereinafter Halstead) in view of “Gradient-Domain Processing within a Texture Atlas” by Fabian Prada, et al. (hereinafter Prada) in view of “Mesh color sharpening” by Zinat Afrose, et al. (hereinafter Afrose) in view of U.S. Patent Application Publication 2008/0056573 A1 (hereinafter Matsuda) in view of U.S. Patent Application Publication 2019/0261024 A1 (hereinafter Livshitz) in view of “An adaptive split-and-merge method for binary image contour data compression” by Yi Xiao, et al. (hereinafter Xiao) as applied to claim 1 above, and further in view of U.S. Patent Application Publication 2017/0132836 (hereinafter Iverson).
Regarding claim 24, the limitations “generate the first parametric model of the first portion of the real-world product or product package based on the first spline, the dimensional dataset, and the shape classification by performing a first manipulation on the first spline, the first manipulation comprising one selected from a revolve, an extrusion, and a crimp; and generate the second parametric model of the second portion of the real-world product or product package based on the second spline, the dimensional dataset, and the shape classification by performing a second manipulation on the second spline, the second manipulation comprising one selected from a revolve, an extrusion, and a crimp” are taught by Halstead as evidenced by Iverson (It is noted that Applicant’s disclosure does not provide any special definition of the term “extrusion” (or “crimp”, although this manipulation is not being addressed with the prior art), and in consideration of MPEP 2173.01 I, Iverson is cited for describing the result of extrusion manipulations performed on contours/splines obtained from 2D images to form 3D models, i.e. Iverson, e.g. abstract, paragraphs 20-55, discloses a system for extruding 2D contours obtained from 2D images, which involves extending the contours in the third dimension to form an extruded shape based on the 2D contour, e.g. figure 1, paragraphs 22-24, 33, providing evidence that one of ordinary skill in the art would understand that an extrusion operation comprises identifying a 2D contour and extending the shape using height information to make a 3D object. Halstead, as discussed in the claim 1 rejection above, e.g. paragraphs 50-68, 74-87, teaches that each shape class is associated with a template 3D mesh model, i.e. the claimed parametric model, which is combined with the claimed dimensional dataset and determined silhouette spline to generate a virtual 3D model having the real-world dimensions of the product. Halstead, e.g. paragraph 74, indicates that a depth estimate is used to determine depth information for each object, analogous to Iverson’s discussion of generating a heightmap for extrusion as in paragraphs 22-24, 33, followed by paragraph 75 modifying the dimensions of the 3D mesh outline to have the corresponding depth profile, making an extruded object shape based on the extracted contour/silhouette, i.e. as in paragraph 77, with classes Box, Battery, and Bag, this amounts to extruding a 6-sided object to match the contour/silhouette and depth profile. Furthermore, Halstead, e.g. paragraph 84, confirms that these manipulations are considered extrusions by those of ordinary skill in the art, i.e. “For the BATTERY shape class, two components are generated in Steps 114 and 116. In addition to the cardboard insert processing following the above described, Step 114 creates a clamshell extrusion shape file 23.”, i.e. the second parametric model is generated using an extrusion manipulation.)
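As an illustrative aid only, the sense of “extrusion” discussed above (extending a 2D contour in the third dimension using depth information) might be sketched as follows; end caps and texture coordinates are omitted, and the depth value is assumed to come from the product’s dimensional data.

```python
# Illustrative sketch only: extrude a 2D silhouette contour along the depth
# axis, producing the side-wall triangles of a prism-like 3D shape.
import numpy as np

def extrude_contour(contour_xy: np.ndarray, depth: float):
    n = len(contour_xy)
    front = np.column_stack([contour_xy, np.zeros(n)])        # contour at z = 0
    back = np.column_stack([contour_xy, np.full(n, depth)])   # contour at z = depth
    vertices = np.vstack([front, back])
    faces = []
    for i in range(n):                                        # one quad (two triangles) per contour edge
        j = (i + 1) % n
        faces.append([i, j, n + j])
        faces.append([i, n + j, n + i])
    return vertices, np.asarray(faces)
```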
Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2011/0254840 A1 (hereinafter Halstead) in view of “Gradient-Domain Processing within a Texture Atlas” by Fabian Prada, et al. (hereinafter Prada) in view of “Mesh color sharpening” by Zinat Afrose, et al. (hereinafter Afrose) in view of U.S. Patent Application Publication 2008/0056573 A1 (hereinafter Matsuda) in view of U.S. Patent Application Publication 2019/0261024 A1 (hereinafter Livshitz) in view of “An adaptive split-and-merge method for binary image contour data compression” by Yi Xiao, et al. (hereinafter Xiao) as applied to claim 9 above, and further in view of U.S. Patent Application Publication 2009/0044136 A1 (hereinafter Flider).
Regarding claim 10, the limitations “wherein one or more points of the first spline, the second spline, or parametric model are configured to be selected or dragged, and wherein the one or more imaging refinements comprise receiving a selection or drag command to adjust or reduce the one or more points of the first spline, the second spline, or parametric model” are not explicitly taught by Halstead (Halstead, e.g. paragraph 14, indicates that a benefit of the system is allowing the model processing to be rerun to accommodate changes and corrections, i.e. analogous to paragraphs 80 and 87, a first output model for a given product may be deemed unsatisfactory for an intended application, causing the user to change or refine the model generation parameters or input data to achieve the desired result. While Halstead, e.g. paragraph 73, also teaches that the spline(s) may be iteratively refined in step 114, Halstead does not address receiving a selection or drag command to adjust or reduce points of the spline.) However, this limitation is suggested by Flider (Flider, e.g. abstract, paragraphs 46-131, describes a user interface for a system which performs foreground/background image masking for editing slides in a presentation. Flider’s foreground/background masks are defined in part by splines arranged along the perimeter of the intended foreground object, e.g. paragraphs 122, 124, 127, 130, 131, figure 27, analogous to Halstead’s outline curve spline. Further, Flider, e.g. paragraph 131, teaches that the user can be provided with the ability to refine the selected region by selecting and dragging one or more of the vertices and/or deleting vertices, followed by indicating satisfaction with the resulting region/spline such that the system can continue to the next processing step of performing the extraction.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Halstead’s virtual 3D product model generation system, including Prada’s texture atlas local filtering technique and interface, including Matsuda’s image region detection technique, using Livshitz’ contour optimization technique, using Xiao’s line segment merging technique, to include Flider’s foreground object spline refinement interface in order to allow the user to refine the outline curve spline(s) determined for a product model in step 114 to achieve a desired resulting virtual 3D product model. That is, Halstead, paragraphs 14, 80, 87, indicates that the modeling operation can be performed repeatedly using different parameters and templates in order to make corrections to the output model until it is satisfactory for the intended application. In instances where the outline curve spline(s) derived using the image alpha layer in step 112 do not produce a satisfactory result, e.g. if the pre-existing assets mentioned in Halstead paragraph 12 included erroneous or inaccurate alpha layer data, a correction would be required, and Flider’s user interface is directed to assisting a user in refining a foreground object selection spline to a desired shape. In the modified system, when the resulting virtual 3D product model is not satisfactory to the user, in addition to Halstead’s disclosed change in template polygon counts or higher resolution image, the user would be able to use Flider’s user interface to refine the outline curve spline(s) determined in step 114 until the user is satisfied, as indicated by a command to the system to perform the remaining steps in the modeling process.
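For illustration only, the following minimal sketch shows the kind of point-level refinement discussed above, i.e. dragging or deleting vertices of an outline spline before the remaining modeling steps are run. It is not code from Flider or Halstead; the data representation and function names are hypothetical.

# Illustrative sketch only: refining an outline spline (modeled here as a list
# of control points) by dragging or deleting a user-selected point.
from typing import List, Tuple

Point = Tuple[float, float]

def drag_point(spline: List[Point], index: int, new_pos: Point) -> List[Point]:
    """Move one control point of the outline spline to a user-chosen position."""
    refined = list(spline)
    refined[index] = new_pos
    return refined

def delete_point(spline: List[Point], index: int) -> List[Point]:
    """Remove a control point, reducing the number of points in the spline."""
    return [p for i, p in enumerate(spline) if i != index]

# Example: the user drags the third vertex, deletes a redundant vertex, and then
# (conceptually) confirms the result so the later modeling steps can proceed.
outline = [(0, 0), (10, 0), (10, 4), (10, 8), (0, 8)]
outline = drag_point(outline, 2, (11, 4))
outline = delete_point(outline, 3)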
Regarding claim 11, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 10 above, i.e. the refinement of the outline curve spline will adjust which portions of the image are applied to the 3D model as a texture, e.g. Halstead, paragraphs 79, 85, 87, indicating that the photograph pixels are texture mapped onto the model surface, which will be affected by the set of image pixels included or excluded due to said refinement, corresponding to the claimed adjustment/application of one or more image features. Further, in the modified system, after the user is satisfied with the refined outline curve spline, the user commands the system to perform the remaining steps in the modeling process, i.e. an adjustment/application command that causes an update to the generated 3D product model.
Claims 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2011/0254840 A1 (hereinafter Halstead) in view of “Gradient-Domain Processing within a Texture Atlas” by Fabian Prada, et al. (hereinafter Prada) in view of “Mesh color sharpening” by Zinat Afrose, et al. (hereinafter Afrose) in view of U.S. Patent Application Publication 2008/0056573 A1 (hereinafter Matsuda) in view of U.S. Patent Application Publication 2019/0261024 A1 (hereinafter Livshitz) in view of “An adaptive split-and-merge method for binary image contour data compression” by Yi Xiao, et al. (hereinafter Xiao) as applied to claims 1 and 12 above, and further in view of U.S. Patent Application Publication 2013/0218714 A1 (hereinafter Watkins).
Regarding claim 13, the limitation “initiate creation of at least a portion of the real-world product or product package based on the virtual 3D model” is not explicitly taught by Halstead (Halstead suggests the models may be used in different virtual store applications, e.g. paragraphs 3, 80, 87, 88, but does not teach creation of a real-world version of the product based on the generated virtual 3D model.) However, this limitation is taught by Watkins (Watkins, e.g. abstract, paragraphs 22-272, describes a system for a virtual jewelry store, wherein a user can customize 3D virtual jewelry product models, e.g. paragraphs 210-272. Watkins, paragraphs 227-272, suggests several preview display alternatives to viewing rendered images of the customized product models, including generating a physical prototype of a customized item, e.g. paragraphs 233, 234, using a 3D printer to print a real-world representation of the customized product model.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Halstead’s virtual 3D product model generation system, including Prada’s texture atlas local filtering technique and interface, including Matsuda’s image region detection technique, using Livshitz’ contour optimization technique, using Xiao’s line segment merging technique, to support Watkins’ alternative preview display techniques, including 3D printing a prototype of the virtual 3D product model(s), in order to provide the user with additional means to inform a purchasing decision.
Regarding claim 14, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 13 above, i.e. Watkins discloses using a 3D printer for the claimed creation.
Regarding claim 15, the limitation “wherein the polygonal model is rendered in the virtual 3D space as part of a mixed reality environment” is not explicitly taught by Halstead (Halstead suggests the models may be used in different virtual store applications, e.g. paragraphs 3, 80, 87, 88, but does not teach that the virtual application environments are mixed reality.) However, this limitation is taught by Watkins (Watkins, e.g. abstract, paragraphs 22-272, describes a system for a virtual jewelry store, wherein a user can customize 3D virtual jewelry product models, e.g. paragraphs 210-272. Watkins, paragraphs 227-272, suggests several preview display alternatives to viewing rendered images of the customized product models, including using augmented reality to preview renderings of the customized product model, e.g. paragraphs 269-272, such as the user wearing the product as in the case of jewelry.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Halstead’s virtual 3D product model generation system, including Prada’s texture atlas local filtering technique and interface, including Matsuda’s image region detection technique, using Livshitz’ contour optimization technique, using Xiao’s line segment merging technique, to support Watkins’ alternative preview display techniques, including augmented reality preview rendering of the virtual 3D product model(s), in order to provide the user with a virtual representation of the product in the user’s real environment. That is, one of ordinary skill in the art, being generally familiar with the advantages of augmented reality, would understand that analogous to Watkins’ selected jewelry on a customer’s hand, augmented reality representation of Halstead’s virtual product models allows the user to easily observe the relative size of the virtual product in relation to real objects, such as the user’s hand, e.g. visualizing the size of Halstead’s exemplary detergent bottle in figure 1B as if the user were actually holding it, or in relation to other objects in the user’s environment such as the user’s kitchen sink.
Response to Arguments
Applicant's arguments filed 1/28/26 have been fully considered but they are not persuasive.
Applicant asserts that Halstead teaches away from the teachings of Livshitz. It is noted that Applicant previously presented this argument, i.e. Applicant’s 3/20/25 remarks, pages 8-9, present substantially the same argument that Halstead teaches away from the proposed modification because Halstead intends to reduce the number of line segments. As in the previous response in the 4/1/25 Office Action, pages 22-23, herein incorporated by reference, Applicant’s remarks still fail to address the fact that Halstead teaches optimizing the fit in addition to reducing the number of line segments, contradicting Applicant’s basis for asserting that Halstead teaches away from the proposed modification. Therefore this argument is still not persuasive.
With respect to Applicant’s argument that the references do not teach the amended limitation of reducing the number of points in regions of a lower rate of angle change, Applicant’s remarks are conclusory, i.e. Applicant’s remarks on page 3 simply declare that there is no teaching or suggestion to reduce the number of points in regions of lower rate of angle change. It is noted this was previously addressed in the 10/1/25 Office Action, page 18, noting that in the modified system, Halstead’s step 114 would still include “reduc[ing] the number of line segments to a target goal”, i.e. while Livshitz’ technique optimizes the fit in areas of high curvature or sharp angles, in areas which are not high curvature or sharp angles, Halstead’s function of reducing the number of line segments would be used to achieve the claimed reduction of the data size of the spline. Applicant’s remarks neither acknowledge this mapping nor offer any rationale contradicting the analysis. Therefore, this argument cannot be considered persuasive.
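For illustration only, the following minimal sketch shows one generic way a polyline could have points removed in regions of low rate of angle change while points at sharp angles are retained, consistent with the mapping discussed above. It is not code from Halstead, Livshitz, or Xiao; the threshold value and helper names are hypothetical.

# Illustrative sketch only: drop interior points whose local turn angle is
# small (low rate of angle change) and keep points at sharp corners.
import math
from typing import List, Tuple

Point = Tuple[float, float]

def turn_angle(prev: Point, cur: Point, nxt: Point) -> float:
    """Absolute change of direction (radians) at `cur` along the polyline."""
    a1 = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
    a2 = math.atan2(nxt[1] - cur[1], nxt[0] - cur[0])
    d = abs(a2 - a1)
    return min(d, 2 * math.pi - d)

def reduce_low_curvature_points(points: List[Point], threshold: float) -> List[Point]:
    """Keep the endpoints and any point whose turn angle meets `threshold`."""
    if len(points) <= 2:
        return list(points)
    kept = [points[0]]
    for i in range(1, len(points) - 1):
        if turn_angle(points[i - 1], points[i], points[i + 1]) >= threshold:
            kept.append(points[i])
    kept.append(points[-1])
    return kept

# Example: nearly collinear interior points are removed; the sharp corner survives.
pts = [(0, 0), (1, 0.01), (2, 0), (3, 0), (3, 3)]
print(reduce_low_curvature_points(pts, threshold=math.radians(10)))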
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT BADER whose telephone number is (571)270-3335. The examiner can normally be reached Monday-Friday, 11am-7pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROBERT BADER/Primary Examiner, Art Unit 2611