DETAILED ACTION
Notice of AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Obligation Under 37 CFR 1.56 – Joint Inventors
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Election/Restrictions
Applicant’s election without traverse of Species I (claims 1-26) in the reply filed on November 21, 2025 is acknowledged.
Status of Claims
Claims 1-30 are pending in this application. Non-elected claims 27-30 are withdrawn from consideration. Thus, claims 1-26 remain under consideration in this application, with claims 1, 25, and 26 being independent.
Foreign Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Drawings
The drawings were received on November 16, 2023. These drawings are acceptable.
Claim Rejections – 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
Determining the scope and contents of the prior art;
Ascertaining the differences between the prior art and the claims at issue;
Resolving the level of ordinary skill in the pertinent art; and
Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 6-8, 22 and 24-26 are rejected under 35 U.S.C. 103 as being unpatentable over FRAKES et al. (US 2020/0027155, hereinafter “FRAKES”) in view of GOLDBERG et al. (US 2011/0245633, hereinafter “GOLDBERG”).
Regarding claim 26, FRAKES discloses a computing device (e.g., ¶ [0067]: “computing device 101 for visualizing a garment fit”) comprising:
one or more processors (e.g., ¶ [0067]: “computing device 101 can include at least one of: processor(s) 110, memory 120, body pre-processor 130, garment pre-processor 140, finite element solver 150, soft-body dynamics solver 160, user interface component(s) 170, and communication component(s) 180.” ¶ [0068]: “The one or more processors 110 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, a FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected.”); and
memory (e.g., ¶ [0069]: “The memory 120 can store data 121 and instructions 122 which are executed by the processor 110 to cause computing device 101 to perform operations.”) storing instructions thereon, the instructions when executed by the one or more processors (e.g., ¶ [0069]: “The memory 120 can store data 121 and instructions 122 which are executed by the processor 110 to cause computing device 101 to perform operations.” ¶ [0081]: “The method(s) can be implemented by one or more computing devices, such as the computing device 101 depicted in FIG. 1.”) cause the one or more processors to:
receive pattern information (e.g., ¶ [0034]: “obtain data indicative of a shape of a garment panel (e.g., from a garment model file)”) indicating configurations of each of patterns of a garment (e.g., ¶ [0034]: “shape of a garment panel”) (¶ [0011]: “obtaining, by one or more computing devices, garment data descriptive of a garment and body data descriptive of a body.” ¶ [0028]: “the one or more garment panels can be stored as a two-dimensional (2D) cutting pattern, a 2D mesh file, a 3D mesh file, or any other suitable format.” ¶ [0029]: “The garment panel(s) can correspond to cutting patterns for the garment. The garment panel(s) can be represented within the garment model file, for example, as blocks or pieces. Each of the garment panel(s) can also have or otherwise be associated with one or more garment feature(s). The garment feature(s) can include, for example, a dart, pocket, placket, j-curve, yolk, dart, embroidery, buttons, etc. The garment feature(s) can be represented within the garment model file, for example, as lines or patterns in one or more layers of each block or piece.” ¶ [0034]: “The garment pre-processor can obtain data indicative of a shape of a garment panel (e.g., from a garment model file), and input the data into the pattern recognition algorithm.” ¶ [0073]: “obtain the one or more garment panels from the memory 120.” ¶ [0110]: “At (601), the method 600 can include obtaining data indicative of a garment model. The data indicative of the garment model can include, for example, garment panels, garment features, and garment material properties. As an example, memory 120 can include one or more garment panels and a garment materials database. The one or more garment panels can include one or more garment features associated with the garment panels. 
The garment materials database can include one or more garment material models, one or more textile mechanical properties corresponding to each garment material model, and one or more textures corresponding to each garment material model. The garment pre-processor 140 can obtain the one or more garment panels, and one or more associated garment features from the memory 120, and obtain garment material properties corresponding to the garment panels from the garment materials database in the memory 120.” ¶ [0116]: “The garment panels 910 can be obtained, for example, by the garment pre-processor 140 from memory 120.” ),
apply the pattern information (e.g., ¶ [0034]: “obtain data indicative of a shape of a garment panel (e.g., from a garment model file), and input the data into the pattern recognition algorithm.”) to a neural network model (e.g., ¶ [0034]: “The pattern recognition algorithm can include, for example, a classification tree, a machine-learned pattern recognition model,” NOTE: One of ordinary skill in the art would understand that a neural network model is merely one well-known type of “machine-learned pattern recognition model”. Thus, as a simple design choice for implementing the machine-learned pattern recognition model disclosed in FRAKES, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have used a neural network model as one well-known implementation for a machine-learned pattern recognition model. For instance, GOLDBERG et al. (US 2011/0245633) clearly teaches that pattern recognition algorithms include machine learning algorithms such as neural networks (See ¶ [0014] of GOLDBERG: “pattern recognition algorithms include machine learning algorithms such as Dynamic Baysian Networks, neural networks, conditional random fields, hidden Markov models, Kalman filters, fuzzy logic, kernel estimations, k-nearest neighbor, learning vector quantization, Gaussian models, and/or radial basis function.”).) to extract features from the configurations of each of the patterns (e.g., ¶ [0034]: “recognize the garment panel shape and/or the one or more geometric feature(s) of the garment panel,” ¶ [0034]: “The pattern recognition algorithm can output data indicative of a body landmark classification of the garment panel and/or data indicative of one or more geometric feature(s) of each garment panel.”) (¶ [0032]: “the garment pre-processor can parse a garment model file (e.g., CAD file) to identify one or more garment panel(s), and classify each of the identified garment panel(s). 
The garment pre-processor can classify the identified garment panel(s) according to one or more body landmark(s). Each of the body landmark(s) can be associated with a specific region or location on a 3D body model. A body landmark can be associated with, for example, a front, back, chest, abdomen, seat, left/right arm, left/right leg, waist, neck, shoulders, left/right elbow, left/right knee, left/right wrist, etc. on a 3D body model.” ¶ [0032]: “the garment pre-processor can classify the garment panel(s) based on a garment type and/or a garment panel shape.” ¶ [0034]: “the garment pre-processor can use a pattern recognition algorithm to classify each panel in the garment model file. For example, the garment pre-processor can use a pattern recognition algorithm to determine a body landmark classification for a garment panel. The garment pre-processor can obtain data indicative of a shape of a garment panel (e.g., from a garment model file), and input the data into the pattern recognition algorithm. The pattern recognition algorithm can output data indicative of a body landmark classification of the garment panel and/or data indicative of one or more geometric feature(s) of each garment panel. The pattern recognition algorithm can include, for example, a classification tree, a machine-learned pattern recognition model, and/or a probability/score calculation approach to help recognize the garment panel shape and/or the one or more geometric feature(s) of the garment panel, and to determine the body landmark classification of the garment panel. 
The machine learned approach and the probability/score calculation approach can use any unique identifying features of the garment panel to match it to a garment panel from a garment panel database.” ¶ [0034]: “for each garment panel in the garment model file” ¶ [0035]: “subsequent to classifying the garment panel, the garment pre-processor can identify one or more garment feature(s) associated with a garment panel in a garment model file. For example, the garment pre-processor can use a probability lookup table, machine-learned feature recognition model, and/or other pattern recognition algorithms to identify the garment feature(s) based on lines or patterns in one or more layers of one or more blocks or pieces in the garment model file. The pattern recognition algorithm for identifying the one or more garment feature(s) can be similar to the pattern recognition algorithm to classify each panel in the garment model file described above.” ¶ [0073]: “The garment pre-processor 140 can obtain the one or more garment panels from the memory 120. The garment pre-processor 140 can determine one or more stitch lines/curves and one or more respective attachment points for each of the one or more garment panels.”),
predict arrangement points (e.g., ¶ [0034]: “determine the body landmark classification of the garment panel”), by the neural network model (e.g., ¶ [0034]: “machine-learned pattern recognition model”), for placing the patterns relative to a three-dimensional (3D) avatar on which the garment is placed by processing the extracted features (e.g., ¶ [0034]: “recognize the garment panel shape and/or the one or more geometric feature(s) of the garment panel, and to determine the body landmark classification of the garment panel.”) (¶ [0032]: “the garment pre-processor can parse a garment model file (e.g., CAD file) to identify one or more garment panel(s), and classify each of the identified garment panel(s). The garment pre-processor can classify the identified garment panel(s) according to one or more body landmark(s). Each of the body landmark(s) can be associated with a specific region or location on a 3D body model. A body landmark can be associated with, for example, a front, back, chest, abdomen, seat, left/right arm, left/right leg, waist, neck, shoulders, left/right elbow, left/right knee, left/right wrist, etc. on a 3D body model.” ¶ [0034]: “the garment pre-processor can use a pattern recognition algorithm to classify each panel in the garment model file. For example, the garment pre-processor can use a pattern recognition algorithm to determine a body landmark classification for a garment panel. The garment pre-processor can obtain data indicative of a shape of a garment panel (e.g., from a garment model file), and input the data into the pattern recognition algorithm. The pattern recognition algorithm can output data indicative of a body landmark classification of the garment panel and/or data indicative of one or more geometric feature(s) of each garment panel. 
The pattern recognition algorithm can include, for example, a classification tree, a machine-learned pattern recognition model, and/or a probability/score calculation approach to help recognize the garment panel shape and/or the one or more geometric feature(s) of the garment panel, and to determine the body landmark classification of the garment panel. The machine learned approach and the probability/score calculation approach can use any unique identifying features of the garment panel to match it to a garment panel from a garment panel database.” ¶ [0112]: “At (603), the method 600 can include positioning the garment panels onto the body model. For example, the garment pre-processor 140 can classify each of the garment panels according to one or more body landmarks, and match the garment panels with the one or more predetermined body landmarks associated with the body model, in order to position the garment panels on the body model. In addition, the garment pre-processor 140 can use a rules based approach to fine-tune a location at which to position the garment panels on the body model.” ¶ [0116]: “The garment pre-processor 140 can classify the garment panels 910 according to body landmarks. In particular, the garment panels 910 can be classified according to the body landmarks: left arm, right arm, front left side torso, front right side torso, back left side, and back right side.” ),
arrange at least a subset of the patterns at the predicted arrangement points (¶ [0036]: “the garment pre-processor can position the identified garment panel(s) on a 3D body model. The garment pre-processor can position the identified garment panel(s) based on the body landmark classification of the garment panel(s). The garment pre-processor can receive a 3D body model that includes one or more predetermined body landmark(s) associated with a predetermined region or location on the 3D body model.” ¶ [0036]: “The garment pre-processor can position and hold the garment panel(s) at a specific region or location on the 3D body model by matching the body landmark classification of the garment panel(s) with the predetermined body landmark(s) associated with the 3D body model, so that the garment panel(s) can be stitched to create a 3D garment model representing the garment. For example, if a garment panel is for a chest body landmark, then the garment pre-processor can position the garment panel at a region or location on a 3D body model that is associated with the chest body landmark (e.g., the chest region on the 3D body model).” ¶ [0112]: “At (603), the method 600 can include positioning the garment panels onto the body model. For example, the garment pre-processor 140 can classify each of the garment panels according to one or more body landmarks, and match the garment panels with the one or more predetermined body landmarks associated with the body model, in order to position the garment panels on the body model. In addition, the garment pre-processor 140 can use a rules based approach to fine-tune a location at which to position the garment panels on the body model.” ¶ [0116]: “The garment pre-processor 140 can position the garment panels 910 by matching the garment panels with the one or more predetermined body landmarks associated with the body model 901, based on the classification of the garment panels 910.”),
assemble the patterns from the arrangement points into the garment (e.g., ¶ [0036]: “position and hold the garment panel(s) at a specific region or location on the 3D body model by matching the body landmark classification of the garment panel(s) with the predetermined body landmark(s) associated with the 3D body model, so that the garment panel(s) can be stitched to create a 3D garment model representing the garment.”) placed on the 3D avatar (e.g., ¶ [0036]: “position and hold the garment panel(s) at a specific region or location on the 3D body model “ ¶ [0027]: “three dimensional model (3D) of a body” ¶ [0071]: “a body model representing a target body”) (¶ [0027]: “computational models representing garment cutting patterns can be stitched directly on a three dimensional model (3D) of a body to create a 3D garment model representing the garment. Stitching can be performed by stitching different edges of a garment cutting pattern according to a specific sequence.” ¶ [0028]: “the garment pre-processor can generate a model of a garment from one or more garment panels associated with such garment. For example, the one or more garment panels can be stored as a two-dimensional (2D) cutting pattern, a 2D mesh file, a 3D mesh file, or any other suitable format. One or more stitch lines/curves and one or more respective attachment points can be determined for each of the one or more garment panels. A finite element solver can be used to perform stitching by connecting the one or more stitch lines/curves in 3D along the one or more respective attachment points to create a 3D stitched garment. The finite element solver used to perform stitching can be the same as or different from the finite element solver used to perform garment deformation. 
The finite element solver can perform the stitching of the garment with or without a 3D body model between garment components.” ¶ [0036]: “The garment pre-processor can position and hold the garment panel(s) at a specific region or location on the 3D body model by matching the body landmark classification of the garment panel(s) with the predetermined body landmark(s) associated with the 3D body model, so that the garment panel(s) can be stitched to create a 3D garment model representing the garment. For example, if a garment panel is for a chest body landmark, then the garment pre-processor can position the garment panel at a region or location on a 3D body model that is associated with the chest body landmark (e.g., the chest region on the 3D body model).” ¶ [0073]: “the garment pre-processor 140 can prepare the garment model by stitching one or more garment panels along one or more stitch lines or curves. For example, the garment pre-processor 140 can generate a garment model of a garment based at least in part on one or more garment panels associated with the garment. The garment pre-processor 140 can obtain the one or more garment panels from the memory 120. The garment pre-processor 140 can determine one or more stitch lines/curves and one or more respective attachment points for each of the one or more garment panels. In some implementations, a finite element solver can be used to perform stitching by connecting the one or more stitch lines/curves in 3D along the one or more respective attachment points to create a 3D stitched garment.” ¶ [0073]: “the garment pre-processor 140 can perform the stitching with a body model, so that a garment stitching approach can stitch garment cutting patterns directly on the body model.” ¶ [0113]: “At (604), the method 600 can include stitching the garment panels to prepare the garment model. 
For example, the garment pre-processor 140 can analyze the garment panels, and determine a stitching sequence based on a size, position, material properties, or garment features associated with the garment panels. The stitching sequence can include, for example, a stitching order and/or different stitching techniques for stitching the garment panels along one or more stitch lines/curves and one or more attachment points. The garment pre-processor 140 can stitch each of the garment panels along the one or more stitch lines/curves and one or more respective attachment points.”), and
perform simulation of the garment on the 3D avatar (¶ [0011]: “simulating, by the one or more computing devices, a garment deformation of the garment due to contact from the body.” ¶ [0012]: “simulate deformation of the garment on the body.” ¶ [0013]: “simulate deformation of the garment due to contact with the body model” ¶ [0026]: “a modified finite element solver can be used to simulate deformation of the garment due to contact from the body while a soft-body solver can be used to simulate deformation of the body due to contact from the garment.” ¶ [0074]: “The finite element solver 150 can be configured to simulate deformation of a garment on a body.” ¶ [0086]: “At (204), the method 200 can include simulating garment deformation.” ¶ [0087]: “At (205), the method 200 can include simulating body deformation.” ¶ [0088]: “At (206), the method 200 can include visualizing garment fit and appearance.” ¶ [0116]: “The garment pre-processor 140 can determine a stitching sequence 911, and perform stitching by connecting the garment panels 910 according to the stitching sequence 911 in 3D. The garment pre-processor 140 can perform the stitching of the garment with or without the body model 901 between garment panels.”).
Regarding claim 25, claim 25 is directed to a non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform the operations of the computing device of claim 26 and, as such, is rejected for the same reasons applied above in the rejection of claim 26.
Regarding claim 1, claim 1 is directed to the method performed by the computing device of claim 26 and, as such, is rejected for the same reasons applied above in the rejection of claim 26.
Regarding claim 6 (depends on claim 1), FRAKES discloses that the pattern information further comprises at least one of:
symmetry information of the patterns indicating which of the patterns are symmetrical (This limitation does not need to be met because at least one of the alternative limitations has been met.);
a total number of the patterns in the garment (¶ [0033]: “As an example, if a garment model file includes a garment model of a shirt type garment, then the garment pre-processor can determine that the garment model file should include at least one garment panel for each of the body landmarks: chest and abdomen, back, left arm, right arm, and shoulders. If the garment model file includes a dress type garment, then the garment pre-processor can determine that the garment model file should include at least one garment panel for each of the body landmarks: front, and back.” ); and
internal line segment information of the patterns comprising at least one of a notch of the patterns, a sewing line of the patterns, a cut line of the patterns, a dart line of the patterns, a length of each line segment of the patterns, or a curvature of each line segment of the patterns (¶ [0029]: “In some implementations, a 3D garment model can be represented by or generated from or based on a computer-aided design (CAD) model file (e.g., .DXF file). The garment model file can include data representing a garment type and one or more garment panel(s). The garment type can correspond to a general classification of a garment, such as, for example, a shirt, pants, top, dress, etc. The garment panel(s) can correspond to cutting patterns for the garment. The garment panel(s) can be represented within the garment model file, for example, as blocks or pieces. Each of the garment panel(s) can also have or otherwise be associated with one or more garment feature(s). The garment feature(s) can include, for example, a dart, pocket, placket, j-curve, yolk, dart, embroidery, buttons, etc. The garment feature(s) can be represented within the garment model file, for example, as lines or patterns in one or more layers of each block or piece.” ¶ [0035]: “In some implementations, optionally subsequent to classifying the garment panel, the garment pre-processor can identify one or more garment feature(s) associated with a garment panel in a garment model file. For example, the garment pre-processor can use a probability lookup table, machine-learned feature recognition model, and/or other pattern recognition algorithms to identify the garment feature(s) based on lines or patterns in one or more layers of one or more blocks or pieces in the garment model file.” ¶ [0028]: “In some implementations, the garment pre-processor can generate a model of a garment from one or more garment panels associated with such garment. 
For example, the one or more garment panels can be stored as a two-dimensional (2D) cutting pattern, a 2D mesh file, a 3D mesh file, or any other suitable format. One or more stitch lines/curves and one or more respective attachment points can be determined for each of the one or more garment panels. A finite element solver can be used to perform stitching by connecting the one or more stitch lines/curves in 3D along the one or more respective attachment points to create a 3D stitched garment.” ¶ [0040]: “The stitching sequence can include, for example, a stitching order and/or different stitching techniques for stitching the garment panel(s) along one or more stitch lines/curves and one or more attachment points for the garment panel(s).” ¶ [0073]: “The garment pre-processor 140 can obtain the one or more garment panels from the memory 120. The garment pre-processor 140 can determine one or more stitch lines/curves and one or more respective attachment points for each of the one or more garment panels.”).
Regarding claim 7 (depends on claim 1), FRAKES discloses:
receiving supplemental information comprising at least one of a size of the 3D avatar (¶ [0038]: “As an example, if a garment panel is for a right arm body landmark, then the garment pre-processor can position a midpoint of the garment panel at an elbow location within a right arm region on the 3D body model. In addition, the garment pre-processor can position a should end of the garment panel at a shoulder location within the right arm region on the 3D body model. In addition, before positioning the midpoint, the garment pre-processor can determine if a length of the garment panel is sufficient for the garment panel to extend from the shoulder location to the elbow location.” NOTE: In order to determine if a length of the garment panel is sufficient to extend from the shoulder location to the elbow location, the size of the 3D body model (e.g., the length from the shoulder to the elbow) must be known. Thus, a size of the 3D body model must be received.), positions of the arrangement points on the 3D avatar (e.g., ¶ [0036]: “The garment pre-processor can receive a 3D body model that includes one or more predetermined body landmark(s) associated with a predetermined region or location on the 3D body model.” ¶ [0112]: “the one or more predetermined body landmarks associated with the body model,”), or a size of an arrangement plate comprising the arrangement points (This limitation does not need to be met because at least one of the alternative limitations has been met.) (¶ [0032]: “In some implementations, the garment pre-processor can parse a garment model file (e.g., CAD file) to identify one or more garment panel(s), and classify each of the identified garment panel(s). The garment pre-processor can classify the identified garment panel(s) according to one or more body landmark(s). Each of the body landmark(s) can be associated with a specific region or location on a 3D body model. 
A body landmark can be associated with, for example, a front, back, chest, abdomen, seat, left/right arm, left/right leg, waist, neck, shoulders, left/right elbow, left/right knee, left/right wrist, etc. on a 3D body model. In some implementations, the garment model file can include data representing a body landmark classification for one or more garment panel(s), and the garment pre-processor classify the garment panel(s) based on the body landmark classification information. In some implementations, the garment pre-processor can classify the garment panel(s) based on a garment type and/or a garment panel shape.” ¶ [0033]: “As an example, if a garment model file includes a garment model of a shirt type garment, then the garment pre-processor can determine that the garment model file should include at least one garment panel for each of the body landmarks: chest and abdomen, back, left arm, right arm, and shoulders.” ¶ [0036]: “In some implementations, the garment pre-processor can position the identified garment panel(s) on a 3D body model. The garment pre-processor can position the identified garment panel(s) based on the body landmark classification of the garment panel(s). The garment pre-processor can receive a 3D body model that includes one or more predetermined body landmark(s) associated with a predetermined region or location on the 3D body model. Alternatively, the garment pre-processor can obtain a template 3D body model that includes one or more predetermined body landmark(s) associated with a predetermined region or location on the 3D body model. The garment pre-processor can position and hold the garment panel(s) at a specific region or location on the 3D body model by matching the body landmark classification of the garment panel(s) with the predetermined body landmark(s) associated with the 3D body model, so that the garment panel(s) can be stitched to create a 3D garment model representing the garment. 
For example, if a garment panel is for a chest body landmark, then the garment pre-processor can position the garment panel at a region or location on a 3D body model that is associated with the chest body landmark (e.g., the chest region on the 3D body model).” ¶ [0037]: “In some implementations, the garment pre-processor can use a rules based approach to position the garment panel(s) on the 3D body model. In particular, the garment pre-processor can use the rules based approach (e.g., a garment pre-processing template) to fine-tune a location within a region on the 3D body model, and determine a segment or point within a garment panel that corresponds to the location. The garment pre-processor can position the segment or point within the garment panel at the location within the region on the 3D body model. The rules based approach can be based on a predetermined set of rules obtained from the garment pre-processing template.” ¶ [0038]: “As an example, if a garment panel is for a right arm body landmark, then the garment pre-processor can position a midpoint of the garment panel at an elbow location within a right arm region on the 3D body model. In addition, the garment pre-processor can position a should end of the garment panel at a shoulder location within the right arm region on the 3D body model. In addition, before positioning the midpoint, the garment pre-processor can determine if a length of the garment panel is sufficient for the garment panel to extend from the shoulder location to the elbow location.” ¶ [0071]: “In some implementations, the body pre-processor 130 can skip compression. For example, the body pre-processor 130 can select a body model representing a template body (e.g., for use by a body morphing approach), or select a body model representing a target body (e.g., for use by a garment stitching approach).” ¶ [0112]: “At (603), the method 600 can include positioning the garment panels onto the body model. 
For example, the garment pre-processor 140 can classify each of the garment panels according to one or more body landmarks, and match the garment panels with the one or more predetermined body landmarks associated with the body model, in order to position the garment panels on the body model. In addition, the garment pre-processor 140 can use a rules based approach to fine-tune a location at which to position the garment panels on the body model.” ¶ [0116]: “FIG. 9 depicts an example graphical diagram of preparing a garment model using a garment-stitching approach, according to example embodiments of the present disclosure. The garment panels 910 can be obtained, for example, by the garment pre-processor 140 from memory 120. The garment pre-processor 140 can classify the garment panels 910 according to body landmarks. In particular, the garment panels 910 can be classified according to the body landmarks: left arm, right arm, front left side torso, front right side torso, back left side, and back right side. The garment pre-processor 140 can position the garment panels 910 by matching the garment panels with the one or more predetermined body landmarks associated with the body model 901, based on the classification of the garment panels 910. The garment pre-processor 140 can determine a stitching sequence 911, and perform stitching by connecting the garment panels 910 according to the stitching sequence 911 in 3D. The garment pre-processor 140 can perform the stitching of the garment with or without the body model 901 between garment panels.”); and
feeding the supplemental information to the neural network model for predicting the arrangement points (¶ [0035]: “the garment pre-processor can use a probability lookup table, machine-learned feature recognition model, and/or other pattern recognition algorithms to identify the garment feature(s) based on lines or patterns in one or more layers of one or more blocks or pieces in the garment model file.” ¶ [0036]: “In some implementations, the garment pre-processor can position the identified garment panel(s) on a 3D body model. The garment pre-processor can position the identified garment panel(s) based on the body landmark classification of the garment panel(s). The garment pre-processor can receive a 3D body model that includes one or more predetermined body landmark(s) associated with a predetermined region or location on the 3D body model. Alternatively, the garment pre-processor can obtain a template 3D body model that includes one or more predetermined body landmark(s) associated with a predetermined region or location on the 3D body model. The garment pre-processor can position and hold the garment panel(s) at a specific region or location on the 3D body model by matching the body landmark classification of the garment panel(s) with the predetermined body landmark(s) associated with the 3D body model, so that the garment panel(s) can be stitched to create a 3D garment model representing the garment.” ¶ [0112]: “At (603), the method 600 can include positioning the garment panels onto the body model. For example, the garment pre-processor 140 can classify each of the garment panels according to one or more body landmarks, and match the garment panels with the one or more predetermined body landmarks associated with the body model, in order to position the garment panels on the body model.” ¶ [0116]: “FIG. 
9 depicts an example graphical diagram of preparing a garment model using a garment-stitching approach, according to example embodiments of the present disclosure. The garment panels 910 can be obtained, for example, by the garment pre-processor 140 from memory 120. The garment pre-processor 140 can classify the garment panels 910 according to body landmarks. In particular, the garment panels 910 can be classified according to the body landmarks: left arm, right arm, front left side torso, front right side torso, back left side, and back right side. The garment pre-processor 140 can position the garment panels 910 by matching the garment panels with the one or more predetermined body landmarks associated with the body model 901, based on the classification of the garment panels 910.”).
Regarding claim 8 (depends on claim 1), FRAKES discloses:
predicting sewing information indicating pairs of line segments of the patterns to be sewn and directions in which the line segments are to be sewn (¶ [0027]: “Stitching can be performed by stitching different edges of a garment cutting pattern according to a specific sequence.” ¶ [0028]: “One or more stitch lines/curves and one or more respective attachment points can be determined for each of the one or more garment panels. A finite element solver can be used to perform stitching by connecting the one or more stitch lines/curves in 3D along the one or more respective attachment points to create a 3D stitched garment.” ¶ [0031]: “In some implementations, a garment pre-processing template can provide a complete set of assembly instructions for an associated garment. The garment pre-processing template can be used while parsing a garment model file. The template can include instructions for positioning the associated garment; identifying stitch lines; and assembling one or more garment panel(s) on a 3D body model. The template can include a set of rules which use a two-dimensional position of the garment panel(s) and the 3D body model to define a transformation required to position the garment panel(s) around the 3D body model in a contract free state. The rules align the garment panel(s) so that the 3D body model has minimal interference with the garment as the garment panel(s) are stitched. The template can also use one or more boundary identifier(s) in the garment model file to define boundary pairs of garment panel(s) to be stitched during the assembly process. In addition, the template can include rules for handling optional design features in the garment model file (e.g., identified through automated feature recognition). The garment pre-processing template can include an orientation, order, and direction of assembly for the garment panel(s) so that the assembly around the 3D body model can be performed in a stable and efficient manner. 
The garment stitching can occur either sequentially between two garment panels (e.g., similar to manually stitching two pieces of cloth) or in a single step (e.g., fusing two pieces of cloth instantaneously). Multiple pairs of garment panels can also be fused concurrently.” ¶ [0040]: “the garment pre-processor can determine an appropriate stitching sequence to stitch garment panel(s) to create a 3D garment model representing a garment. The stitching sequence can be based on, for example, a size, position, material properties, garment feature(s), etc. of the garment panel(s). The stitching sequence can include, for example, a stitching order and/or different stitching techniques for stitching the garment panel(s) along one or more stitch lines/curves and one or more attachment points for the garment panel(s). In some implementations, a rules based approach can be used to determine the stitching sequence. For example, a rule may specify that panels classified as a first type must be stitched prior to panels classified as a second type, and so forth according to rules that describe different priorities of panel types and combinations of panel types.” ¶ [0073]: “In some implementations, the garment pre-processor 140 can prepare the garment model by stitching one or more garment panels along one or more stitch lines or curves. For example, the garment pre-processor 140 can generate a garment model of a garment based at least in part on one or more garment panels associated with the garment. The garment pre-processor 140 can obtain the one or more garment panels from the memory 120. The garment pre-processor 140 can determine one or more stitch lines/curves and one or more respective attachment points for each of the one or more garment panels.” ¶ [0113]: “At (604), the method 600 can include stitching the garment panels to prepare the garment model. 
For example, the garment pre-processor 140 can analyze the garment panels, and determine a stitching sequence based on a size, position, material properties, or garment features associated with the garment panels. The stitching sequence can include, for example, a stitching order and/or different stitching techniques for stitching the garment panels along one or more stitch lines/curves and one or more attachment points. The garment pre-processor 140 can stitch each of the garment panels along the one or more stitch lines/curves and one or more respective attachment points.”).
Regarding claim 22 (depends on claim 1), FRAKES teaches that the arranging of at least the subset of patterns comprises:
arranging a superimposing pattern of the patterns (e.g., such as a chest pocket panel of a t-shirt; ¶ [0029]: “garment feature” which can include a ¶ [0029]: “pocket”) on a base pattern to which the superimposing pattern is imposed (e.g., such as the front body panel of a t-shirt; ¶ [0029]: “Each of the garment panel(s) can also have or otherwise be associated with one or more garment feature(s).”) (¶ [0029]: “The garment panel(s) can correspond to cutting patterns for the garment. The garment panel(s) can be represented within the garment model file, for example, as blocks or pieces. Each of the garment panel(s) can also have or otherwise be associated with one or more garment feature(s). The garment feature(s) can include, for example, a dart, pocket, placket, j-curve, yolk, dart, embroidery, buttons, etc. The garment feature(s) can be represented within the garment model file, for example, as lines or patterns in one or more layers of each block or piece.” NOTE: In other words, a pocket garment feature can be a pattern in a layer of a garment panel block or piece. ¶ [0035]: “In some implementations, optionally subsequent to classifying the garment panel, the garment pre-processor can identify one or more garment feature(s) associated with a garment panel in a garment model file. For example, the garment pre-processor can use a probability lookup table, machine-learned feature recognition model, and/or other pattern recognition algorithms to identify the garment feature(s) based on lines or patterns in one or more layers of one or more blocks or pieces in the garment model file. The pattern recognition algorithm for identifying the one or more garment feature(s) can be similar to the pattern recognition algorithm to classify each panel in the garment model file described above.” ¶ [0113]: “At (604), the method 600 can include stitching the garment panels to prepare the garment model. 
For example, the garment pre-processor 140 can analyze the garment panels, and determine a stitching sequence based on a size, position, material properties, or garment features associated with the garment panels. The stitching sequence can include, for example, a stitching order and/or different stitching techniques for stitching the garment panels along one or more stitch lines/curves and one or more attachment points. The garment pre-processor 140 can stitch each of the garment panels along the one or more stitch lines/curves and one or more respective attachment points.” NOTE: If the garment is a common t-shirt including a chest pocket “garment feature” sewn onto the front panel of the t-shirt (which is a well-known common t-shirt construction, an example of which is shown below), in order to correctly assemble the garment, one of ordinary skill in the art would understand that the exterior pocket panel (i.e., the superimposing pattern) would, by necessity, need to be arranged superimposed onto the front body panel of the t-shirt (i.e., the base pattern) in order to be stitched onto the front panel of the t-shirt. In other words, if the garment shown in FIG. 9 included a garment feature comprising a front exterior chest pocket, the panel of the chest pocket would have to be arranged superimposed on one of the front torso panels shown in FIG. 9.).
[media_image1.png (183 × 275, greyscale): example image of a t-shirt with an exterior chest pocket sewn onto the front panel]
Regarding claim 24 (depends on claim 1), FRAKES discloses:
wherein the pattern information comprises an image of each of the patterns (e.g., ¶ [0028]: “the one or more garment panels can be stored as a two-dimensional (2D) cutting pattern, a 2D mesh file, a 3D mesh file, or any other suitable format.” ¶ [0029]: “In some implementations, a 3D garment model can be represented by or generated from or based on a computer-aided design (CAD) model file (e.g., .DXF file). The garment model file can include data representing a garment type and one or more garment panel(s). The garment type can correspond to a general classification of a garment, such as, for example, a shirt, pants, top, dress, etc. The garment panel(s) can correspond to cutting patterns for the garment. The garment panel(s) can be represented within the garment model file, for example, as blocks or pieces. Each of the garment panel(s) can also have or otherwise be associated with one or more garment feature(s). The garment feature(s) can include, for example, a dart, pocket, placket, j-curve, yolk, dart, embroidery, buttons, etc. The garment feature(s) can be represented within the garment model file, for example, as lines or patterns in one or more layers of each block or piece.” ¶ [0030]: “one or more garment panel(s) in a garment model file (e.g., CAD file, DXF) using pattern recognition.” ¶ [0032]: “the garment pre-processor can parse a garment model file (e.g., CAD file) to identify one or more garment panel(s), and classify each of the identified garment panel(s).” NOTE: As is well-known in the art, DXF is short for Drawing Exchange Format or Drawing Interchange Format. A DXF file stores images as vector images (i.e., images stored in a vector format). Clearly, one of ordinary skill in the art would understand that the 2D cutting patterns (i.e., “blocks or pieces”) in a DXF file (which can include lines or patterns in each block or piece) can be vector images of each of the panels (i.e., “patterns”).).
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over FRAKES et al. (US 2020/0027155, hereinafter “FRAKES”) in view of GOLDBERG et al. (US 2011/0245633, hereinafter “GOLDBERG”), further in view of CEYLAN AKSIT et al. (US 2023/0326137, hereinafter “CEYLAN AKSIT”).
Regarding claim 9 (depends on claim 1), whereas FRAKES and GOLDBERG may not explicitly teach the following, CEYLAN AKSIT teaches:
wherein the neural network model is trained (e.g., ¶ [0055]: “training subsystem trains the machine learning models”) by backpropagating a difference (e.g., ¶ [0055]: “An error between output generated by decoder neural network and the ground truth is backpropagated.”) between predicted sewing information (e.g., ¶ [0055]: “rendered garment R.sub.t.sup.p synthesized by the network”) and correct sewing information (e.g., ¶ [0055]: “ground truth image” NOTE: In other words, the difference between the rendered garment synthesized by the network and the garment in ground truth image represents a difference between a garment sewn correctly and a garment sewn based on predicted sewing information.) (¶ [0055]: “In some embodiments, the training subsystem trains the machine learning models prior to the processing of FIGS. 3 and 4. In some embodiments, the training subsystem of the rendering system trains the decoder of the second machine learning model G and the neural texture subsystem F end-to-end and jointly. Given a rendered garment R.sub.t.sup.p synthesized by the network and a ground truth image I.sub.t.sup.p, one or more loss functions are minimized. For example, the decoder is a generator of a GAN implemented as a neural network (e.g., a decoder neural network). An error between output generated by decoder neural network and the ground truth is backpropagated. Values of weights associated with connections in the decoder are updated. The process is repeated, computing the loss and readjusting the weights, until an output error is below a predetermined threshold. The training subsystem performs multiple iterations of a training procedure to minimize a loss function to update values of parameters of the decoder (e.g., the weights). In some implementations the loss function includes a perceptual loss and an adversarial loss.”).
Thus, in order to train the machine-learned pattern recognition model (e.g., neural network) used in the garment simulation system taught by FRAKES, it would have been obvious to one of ordinary skill in the art to have trained the neural network by backpropagating a difference between predicted sewing information and correct sewing information, as taught by CEYLAN AKSIT.
Regarding claim 10 (depends on claim 1), whereas FRAKES and GOLDBERG may not explicitly teach the following, CEYLAN AKSIT teaches:
wherein the neural network model is trained (e.g., ¶ [0055]: “training subsystem trains the machine learning models”) by backpropagating a loss (e.g., ¶ [0055]: “An error between output generated by decoder neural network and the ground truth is backpropagated.” ¶ [0055]: “to minimize a loss function”) derived from a length between line segments of the patterns to be sewn to each other (e.g., ¶ [0055]: “Given a rendered garment R.sub.t.sup.p synthesized by the network and a ground truth image I.sub.t.sup.p, one or more loss functions are minimized.” NOTE: One of ordinary skill in the art would understand that differences in a length between line segments of patterns sewn to each other in the rendered garment and line segments of the patterns sewn together in the ground truth image are one possible error (out of a limited number of possible errors) between output generated by decoder neural network and the ground truth to be minimized by backpropagation.) (¶ [0055]: “In some embodiments, the training subsystem trains the machine learning models prior to the processing of FIGS. 3 and 4. In some embodiments, the training subsystem of the rendering system trains the decoder of the second machine learning model G and the neural texture subsystem F end-to-end and jointly. Given a rendered garment R.sub.t.sup.p synthesized by the network and a ground truth image I.sub.t.sup.p, one or more loss functions are minimized. For example, the decoder is a generator of a GAN implemented as a neural network (e.g., a decoder neural network). An error between output generated by decoder neural network and the ground truth is backpropagated. Values of weights associated with connections in the decoder are updated. The process is repeated, computing the loss and readjusting the weights, until an output error is below a predetermined threshold. 
The training subsystem performs multiple iterations of a training procedure to minimize a loss function to update values of parameters of the decoder (e.g., the weights). In some implementations the loss function includes a perceptual loss and an adversarial loss.”).
Thus, in order to train the machine-learned pattern recognition model (e.g., neural network) used in the garment simulation system taught by FRAKES, it would have been obvious to one of ordinary skill in the art to have trained the neural network by backpropagating a loss derived from a length between line segments of the patterns to be sewn to each other, as taught by CEYLAN AKSIT.
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over FRAKES et al. (US 2020/0027155, hereinafter “FRAKES”) in view of GOLDBERG et al. (US 2011/0245633, hereinafter “GOLDBERG”), further in view of ISOGAI et al. (US 2011/0022372, hereinafter “ISOGAI”).
Regarding claim 23 (depends on claim 1), whereas FRAKES and GOLDBERG may not explicitly teach the following, ISOGAI teaches that the patterns (col. 2, lines 18-21: “paper pattern model acquisition means for acquiring a paper pattern model showing a two-dimensional shape of a pattern paper of a clothing that is fitted virtually to the human body model;” col. 2, lines 32-38: “the fitting means sets a temporary model that is formed so as to cover a predetermined section of the human body model, deforms the paper pattern model to bring the paper pattern model into contact with the temporary model, and thereafter deforms the paper pattern model to bring the paper pattern model into contact with the human body model.”) comprise:
information on corresponding arrangement points (e.g., col. 12, line 23: “predefined contact definitions” FIGS. 10, 12, 14, 15 and 17) and arrangement plates (e.g., FIG. 8: “temporary models M1, M2, M3”) comprising the arrangement points (e.g., col. 10, lines 59-60: “definition for the contact targets of the paper pattern model and the temporary model”; FIGS. 9, 11, 13 and 16.) (Abstract: “a temporary model to cover a predetermined section of the human body model, deforms the paper pattern model to bring the paper pattern model into contact with the temporary model, and thereafter deforms the paper pattern model to bring the paper pattern model into contact with the human body model.” col. 4, lines 17-19: “a cylindrical, ellipsoidal, saddle-shaped, polygonal pyramidal or circular conical shape with no bumps is adopted for the temporary model.” col. 12, lines 22-24: “the fitting part 22 stretches each of the paper pattern models in accordance with predefined contact definitions shown in FIG. 10.” col. 4, lines 34-40: “the clothing be an upper-body clothing, that the paper pattern model include paper pattern models for a front body part, back body part, left sleeve and right sleeve, and that the fitting means set a cylindrical temporary model on a left arm and right arm of the human body and set the cylindrical temporary model such that an axial direction thereof follows a straight line connecting both shoulders.” col. 11, lines 17-20: “In this flowchart, the T-shirt is adopted as the clothing. Therefore, the paper pattern model includes four paper pattern models for a front body part FM, back body part BM, left sleeve LS and right sleeve RS.” col. 11, lines 33-41: “As shown in FIG. 
8, the temporary model is in the shape of a cylinder and includes three temporary models: a temporary model M1 that is disposed to cover the entire left arm, a temporary model M2 that is disposed to cover the entire right arm, and a temporary model M3 that is disposed to cover the joints between each shoulder and each arm such that the axial direction thereof follows a straight line connecting the both shoulders.” See FIG. 7 and FIG. 8. col. 12, lines 56-60: “the fitting part 22 carries out the stitching processes according to predefined contact definitions shown in FIG. 12. As shown in the first and second columns of FIG. 12, contacts of the front and back body parts to the human body model and the temporary model M3 are defined.” col. 12, lines 66-67: “contacts of the right and left sleeves to the temporary models M1 and M2 are defined.” col. 14, lines 32-33: “contacts of the front and back body parts to the temporary model M3 are defined” col. 14, lines 39-40: “contacts of the right and left sleeves to the temporary models M1 and M2 are defined” See FIGS. 7-17. MOTIVATION - col. 3, lines 31-37: “the paper pattern model is deformed to come into contact with the temporary model and is thereafter deformed to come into contact with the human body model, thus the solutions of the motion equations can be converged easily, and the wrinkles that are generated when the paper pattern model is deformed to come into contact with the human body model can be reduced.” col. 4, lines 8-13: “because the temporary model has a smoothly-curved surface, it is possible not only to converge the solutions of the motion equations easily, but also to prevent the generation of wrinkles and at the same time to deform the paper pattern model so that it comes into tight contact with the human body model.”).
Thus, in order to obtain a garment modeling system having the cumulative features and/or functionalities taught by FRAKES and ISOGAI, it would have been obvious to one of ordinary skill in the art to have modified FRAKES so as to incorporate patterns comprising information on corresponding arrangement points and arrangement plates comprising the arrangement points, as taught by ISOGAI.
Allowable Subject Matter
Claims 2-5 and 11-21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
At present, it is not apparent to the examiner which part of the application could serve as a basis for new and allowable claims. However, should the applicant nevertheless regard some particular matter as patentable, the examiner encourages applicant to appropriately amend the claims to include such matter and to indicate in the REMARKS the difference(s) between the prior art and the claimed invention as well as the significance thereof.
Furthermore, should applicant decide to amend the claims, examiner respectfully requests that the applicant please indicate in the REMARKS from which page(s), line(s) or claim(s) of the originally filed application that any amendments are derived. See MPEP § 2163(II)(A) (There is a strong presumption that an adequate written description of the claimed invention is present in the specification as filed, Wertheim, 541 F.2d at 262, 191 USPQ at 96; however, with respect to newly added or amended claims, applicant should show support in the original disclosure for the new or amended claims.).
A shortened statutory period for reply to this action is set to expire THREE MONTHS from the mailing date of this action. Extensions of time may be available under the provisions of 37 CFR 1.136(a). In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Failure to reply within the set or extended period for reply will, by statute, cause the application to become ABANDONED (35 USC § 133).
Relevant Prior Art
The following prior art, although not relied upon, is made of record since it is considered pertinent to applicant's disclosure:
CHOCHE et al. (US 2020/0402126) discloses methods and systems for customizing a base digital file for a garment according to a user's body measurements and generating a custom digital file for the garment, which may be used in garment production.
WILCOX (US 2021/0383031) discloses computer implemented methods for generating a garment finish preset comprising assembly instructions for a garment finish for a garment to be fabricated, for automatically generating a garment finish preset comprising assembly instructions for a garment finish for a garment to be fabricated, and for automatically determining at least one candidate from a plurality of garment finish presets, each of said garment finish presets comprising assembly instructions for a garment finish for a garment to be fabricated from garment panels.
WILCOX (US 2023/0298273) discloses a method for fabricating a user-generated garment. In one embodiment, garment data related to a predefined or default garment is received, shape and finish pieces of the garment are separated, a template for each shape piece is selected that comprises information about a position and orientation relative to a human body, the shape pieces are assembled in 3D, a finish macro for each finish piece is selected that comprises assembling instructions for assembling the finish piece to the assembled shape pieces, a preliminary 3D garment is visualized on an avatar in a graphical user interface, a garment adjustment process is performed using the graphical user interface to generate a user-generated garment, and fabrication instructions for fabricating the user-generated garment are generated.
LIANG et al. (US 2023/0306699) discloses a system for improved 3D garment draping simulation. A garment pattern may be obtained that includes a number of flat, 2D garment panels designated to be connected at seam lines. Triangulated versions of each of the 2D garment panels may then be positioned in 3D virtual space relative to a 3D model of a human body, such that one or more annotated points on each triangulated garment panel are aligned with a corresponding labelled point or region on the 3D body. A warped 3D garment mesh may then be generated by repeatedly applying geometric manipulations to the triangulated garment panels to connect their corresponding seam lines without causing collisions between the triangulated garment panels and the 3D body. This warped 3D garment may then be provided as input to a physics-based draping simulator.
VIDAURRE et al. (Vidaurre R, Santesteban I, Garces E, Casas D. “Fully convolutional graph neural networks for parametric virtual try-on.” In Computer Graphics Forum 2020 Dec (Vol. 39, No. 8, pp. 145-156).) discloses a learning-based approach for virtual try-on applications based on a fully convolutional graph neural network that predicts the 3D draping for an arbitrary body shape and garment parameters comprising different 2D panel configurations for parameterized garment types.
CHEN et al. (Chen X, Wang G, Zhu D, Liang X, Torr P, Lin L. “Structure-preserving 3d garment modeling with neural sewing machines.” Advances in Neural Information Processing Systems. 2022 Dec 6;35:15147-59.) discloses a Neural Sewing Machine (NSM), a learning-based framework for structure-preserving 3D garment modeling, which is capable of learning representations for garments with diverse shapes and topologies. To model generic garments, the system first obtains sewing pattern embedding via a unified sewing pattern encoding module, as the sewing pattern can accurately describe the intrinsic structure and the topology of the 3D garment. Then a 3D garment decoder is used to decode the sewing pattern embedding into a 3D garment using the UV-position maps with masks. An inner-panel structure-preserving loss, an inter-panel structure-preserving loss, and a surface-normal loss are used in the learning process to preserve the intrinsic structure of the predicted 3D garment.
WANG et al. (Wang Z, Tao X, Zeng X, Xing Y, Xu Z, Bruniaux P. “Design of customized garments towards sustainable fashion using 3D digital simulation and machine learning-supported human–product interactions.” International Journal of Computational Intelligence Systems. 2023 Feb 16;16(1):16.) discloses an interactive design approach for customized garments towards sustainable fashion using machine learning techniques, including radial basis function artificial neural network (RBF ANN), genetic algorithms (GA), probabilistic neural network (PNN), and support vector regression (SVR).
SHEN et al. (Shen Y, Liang J, Lin MC. “Gan-based garment generation using sewing pattern images.” In European Conference on Computer Vision 2020 Aug 23 (pp. 225-247). Cham: Springer International Publishing.) discloses a GAN-based generator that creates different types of garment meshes, given the garment design (or sewing) patterns.
BERTHOUZOZ et al. (Berthouzoz F, Garg A, Kaufman DM, Grinspun E, Agrawala M. “Parsing sewing patterns into 3D garments.” ACM Transactions on Graphics (TOG). 2013 Jul 21;32(4):1-2.) discloses a system for automatically parsing existing sewing patterns and converting them into 3D garment models. The parser takes a sewing pattern in PDF format as input and starts by extracting the set of panels and styling elements (e.g. darts, pleats and hemlines) contained in the pattern. It then applies a combination of machine learning and integer programming to infer how the panels must be stitched together to form the garment. The system includes an interactive garment simulator that takes the parsed result and generates the corresponding 3D model.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT PEREN who can be reached by telephone at (571) 270-7781, or via email at vincent.peren@uspto.gov. The examiner can normally be reached on Monday-Friday from 10:00 A.M. to 6:00 P.M.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KING POON, can be reached at telephone number (571)272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/VINCENT PEREN/
Examiner, Art Unit 2617
/KING Y POON/Supervisory Patent Examiner, Art Unit 2617