DETAILED ACTION
Claims 1-10, 13-14, 16-22, and 24 are pending in the present application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copies of United Kingdom patent application numbers GB2109828.0 and GB2109831.4, filed on 07/07/2021, have been received and made of record.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/08/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 7, 18, 20, and 24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claims 7 and 18, the phrase "optionally" renders the claim indefinite because it is unclear whether the limitation(s) following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
Regarding claim 24, the phrase "for example" renders the claim indefinite because it is unclear whether the limitation(s) following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
Regarding claim 20, the phrase "such as" renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8, 13-14, 16-17, 21-22, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2023/0371634 to TerKonda in view of Kim et al. (KIM YOUNGJUN ET AL: "3D virtual simulator for breast plastic surgery", COMPUTER ANIMATION AND VIRTUAL WORLDS, vol. 19, no. 3-4, 1 January 2008 (2008-01-01), pages 515-526, XP055970248, GB ISSN: 1546-4261, DOI: 10.1002/cav.237).
Regarding claim 1, TerKonda teaches a computer-implemented method of determining morphology of a human breast (par 0009), the method comprising:
obtaining at least one image of a human subject (par 0009, "The multiple digital images of the torso of the subject may include three or more digital images at differing angles between the torso of the subject and a camera that captures the three or more digital images. Additionally, images may be captured with a video enabled device"; par 0028-0031, "a user installs and interacts with an application on a user device (e.g., a smartphone, tablet, laptop, computer) to obtain images of the user breasts (in box 202)");
extracting features of at least a portion of the subject's body from the at least one image, wherein the features correspond to a model of standard human anatomy (par 0009, “The processing may include morphological image processing to extract image components representing anatomical components of the subject “, par 0044-0045, “ the image processing includes morphological image processing to extract image components representing anatomical landmarks of the chest wall and breasts. Additional processing of the digital models allows for body identification (in box 506). Body identification selects data from the digital model representing the body of the user. Segmentation (in box 508) of the digital model is performed to identify body parts. ….. From the segmented digital model, the networked computing device further performs body part feature extraction (in box 510) to determine and label anatomical landmarks (e.g., body parts) of the torso, chest wall, and breasts such as the sternal notch, xiphoid, nipple, areola, inframammary fold or anterior axillary line”);
generating a three-dimensional model of the subject's body based on the extracted features (par 0009, “The multiple digital images of the torso of the subject may include three or more digital images at differing angles between the torso of the subject and a camera that captures the three or more digital images. Additionally, images may be captured with a video enabled device”, par 0031, “in box 204 the mobile application then creates a digital model for differential analysis of the left and right breast thereby distinguishing differences in breast characteristics, e.g., volume, shape, position, as well as other properties. In some implementations, the digital model is a three-dimensional (3D) digital model including spatial information relating to three spatial dimensions. The process includes a camera-enabled mobile device (e.g., smartphone, tablet, laptop, remote camera, etc.) application to obtain images containing depth information used in calculation of the user's breast measurements. In some embodiments, the digital model is transmitted to a networked computing device for additional image processing (in box 204) which can include the use of machine learning algorithms. A differentiated digital model is calculated based upon the transmitted digital model including labeled anatomical components and approximations of the dimensions and asymmetries of the breasts”, par 0042-0044, “Additional image processing to create a 3D reconstruction and labeling of anatomical parts can take place on the user device or, for example, on a networked computing device (e.g., an image processing server) (in box 504) …. the image processing includes morphological image processing to extract image components representing anatomical landmarks of the chest wall and breasts. Additional processing of the digital models allows for body identification (in box 506). Body identification selects data from the digital model representing the body of the user.”); and
determining a morphological parameter of the subject's breast from the three-dimensional model of the subject's body (par 0006, “disclosed herein is a mobile device application installed on a user device and image processing system which receives multiple images or video of a user and processes the image into a digital model representing the 3D structure of the body of a user. The mobile application can identify parts of a body, such as breasts and/or chest of a user. The system processes the digital model to determine characteristics of the user breasts and chest wall including volume, shape, projection, position, and asymmetry “, par 0031, “in box 204 the mobile application then creates a digital model for differential analysis of the left and right breast thereby distinguishing differences in breast characteristics, e.g., volume, shape, position, as well as other properties. In some implementations, the digital model is a three-dimensional (3D) digital model including spatial information relating to three spatial dimensions “, par 0048, “The image processing further includes determining asymmetries in volume, shape, position, and/or projection of the breasts (in box 512). The networked computing device utilizes a subtraction algorithm to determine asymmetries of the breasts and chests wall”).
TerKonda, however, does not explicitly teach generating a three-dimensional model of the subject's body based on both the extracted features and the model of standard human anatomy.
In a related endeavor, Kim et al. teach extracting features of at least a portion of the subject's body from the at least one image, wherein the features correspond to a model of standard human anatomy (p. 516, sect. Overview and fig. 1: "photographs of subject with input feature points" and "3D template model body with defined feature points"; cf. pp. 517-518, sect. Template Model and sect. Feature point, last two par.; figs. 2-3); and generating a three-dimensional model of the subject's body based on the extracted features and the model of standard human anatomy (p. 516, sect. Overview: "Then, according to the input feature points on the photographs, a template 3D model is morphed by using our deformation methods"; cf. pp. 518-520, sections Global Deformation, Local Deformation and Photo-Mapping).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify TerKonda to include generating a three-dimensional model of the subject's body based on the extracted features and the model of standard human anatomy, as taught by Kim et al., in order to render a 3D virtual model by deforming a standard template model according to feature points from the customer's images, thereby providing a realistic and intuitive 3D simulator for breast augmentation surgery.
Regarding claim 2, TerKonda as modified by Kim et al. teaches all the limitations of claim 1, and Kim et al. further teach wherein generating the three-dimensional model of the subject further comprises deforming the model of standard human anatomy based on the extracted features (abstract: "Our image-based modeling method utilizes a template model, and this is deformed according to the patient's photographs"; Fig. 1; sect. Overview: "according to the input feature points on the photographs, a template 3D model is morphed by using our deformation methods"; sect. Image-Based 3D Torso Body Modeling: "we deformed the template model according to the relations of the feature points of the template model and those from the images"). This would be obvious for the same reason given in the rejection for claim 1.
Regarding claim 3, TerKonda as modified by Kim et al. teaches all the limitations of claim 1, and Kim et al. further teach wherein deforming the model of the standard human anatomy comprises mapping anatomical markers of the model of standard human anatomy onto the extracted features (abstract: "Our image-based modeling method utilizes a template model, and this is deformed according to the patient's photographs"; Fig. 1; sect. Overview: "according to the input feature points on the photographs, a template 3D model is morphed by using our deformation methods"; sect. Image-Based 3D Torso Body Modeling: "we deformed the template model according to the relations of the feature points of the template model and those from the images"), wherein the extracted features correspond to said anatomical markers for the subject's body (Fig. 2; sect. Image-Based 3D Torso Body Modeling: "we deformed the template model according to the relations of the feature points of the template model and those from the images. In this paper, we denote the template feature points as PT and the calculated feature points from the images as PC."). This would be obvious for the same reason given in the rejection for claim 1.
[Image: media_image1.png (greyscale)]
Regarding claim 4, TerKonda as modified by Kim et al. teaches all the limitations of claim 1, and Kim et al. further teach wherein the extracted features and/or the anatomical markers comprise the umbilicus and at least one of the shoulder joints (Fig. 2; sect. Image-Based 3D Torso Body Modeling: LSP, RSP, and NP). This would be obvious for the same reason given in the rejection for claim 1.
Regarding claim 5, TerKonda as modified by Kim et al. teaches all the limitations of claim 1, and further teaches wherein extracting features of at least a portion of the subject's body further comprises extracting edge information of at least a portion of the subject's body from the image (TerKonda: par 0047, "From the segmented digital model, the networked computing device further performs body part feature extraction (in box 510) to determine and label anatomical landmarks (e.g., body parts) of the torso, chest wall, and breasts such as the sternal notch, xiphoid, nipple, areola, inframammary fold or anterior axillary line"; Kim et al.: Figs. 2-3, sect. Image-Based 3D Torso Body Modeling, detecting the body curve via skin detection).
Regarding claim 6, TerKonda as modified by Kim et al. teaches all the limitations of claim 5, and further teaches wherein extracting features of at least a portion of the subject's body further comprises identifying and measuring points of inflection based on the edge information (TerKonda: par 0046-0048, "The networked computing device calculates a depth slice interval based upon on the distance between the measured chest wall distance and nipple distance ….. From the segmented digital model, the networked computing device further performs body part feature extraction (in box 510) to determine and label anatomical landmarks (e.g., body parts) of the torso, chest wall, and breasts such as the sternal notch, xiphoid, nipple, areola, inframammary fold or anterior axillary line…. the data array can be used to determine distances such inter-nipple distance, e.g., the distance between each nipple in 3D space in relation to the distance between positions of each nipple in the 2D pixel array… The image processing further includes determining asymmetries in volume, shape, position, and/or projection of the breasts (in box 512). The networked computing device utilizes a subtraction algorithm to determine asymmetries of the breasts and chests wall"; Kim et al.: Figs. 2-3, Table 1, sect. Image-Based 3D Torso Body Modeling, extracting feature points of the breast to estimate its shape and size).
Regarding claim 7, TerKonda as modified by Kim et al. teaches all the limitations of claim 5, and further teaches wherein the points of inflection are used to identify anatomical regions of interest, optionally wherein said anatomical regions of interest comprise a breast root (TerKonda: par 0047-0048, "From the segmented digital model, the networked computing device further performs body part feature extraction (in box 510) to determine and label anatomical landmarks (e.g., body parts) of the torso, chest wall, and breasts such as the sternal notch, xiphoid, nipple, areola, inframammary fold or anterior axillary line…. the data array can be used to determine distances such inter-nipple distance, e.g., the distance between each nipple in 3D space in relation to the distance between positions of each nipple in the 2D pixel array… The image processing further includes determining asymmetries in volume, shape, position, and/or projection of the breasts (in box 512). The networked computing device utilizes a subtraction algorithm to determine asymmetries of the breasts and chests wall"; Kim et al.: Figs. 2-3, sect. Image-Based 3D Torso Body Modeling, identifying the breast from the body using feature points).
Regarding claim 8, TerKonda as modified by Kim et al. teaches all the limitations of claim 1, and Kim et al. further teach wherein generating the three-dimensional model of the subject further comprises scaling the three-dimensional model of the subject based on size information obtained from the image (sect. Global Deformation: the 3D model is generated by scaling the template 3D model, where the scale factor is determined from the feature points of the captured image). This would be obvious for the same reason given in the rejection for claim 1.
Regarding claim 13, TerKonda as modified by Kim et al. teaches all the limitations of claim 1, and further teaches wherein any or any combination of: (a) extracting features of at least a portion of the subject's body; and (b) generating the three-dimensional model of the subject's body; comprise using one or more machine-learned models (TerKonda: par 0009, "images may be captured with a video enabled device. The processing may be performed using computer vision and a machine learning model. The machine learning model may be a supervised machine learning model. The machine learning model may be an unsupervised machine learning model. The machine learning model may be a computer vision model. The processing may include morphological image processing to extract image components representing anatomical components of the subject. The processing may include body identification that selects data from the model. The digital model may be a digital three-dimensional model", par 0047, "From the segmented digital model, the networked computing device further performs body part feature extraction (in box 510) to determine and label anatomical landmarks (e.g., body parts) of the torso, chest wall, and breasts such as the sternal notch, xiphoid, nipple, areola, inframammary fold or anterior axillary line"; Kim et al.: Figs. 2-3, sect. Image-Based 3D Torso Body Modeling, identifying the breast from the body using feature points).
Regarding claim 14, TerKonda as modified by Kim et al. teaches all the limitations of claim 1, and further teaches wherein the three-dimensional model comprises any or any combination of: a skeleton model; and a three-dimensional volume model (TerKonda: par 0006, "processes the image into a digital model representing the 3D structure of the body of a user", par 0031, "Still referring to FIG. 2, in box 204 the mobile application then creates a digital model for differential analysis of the left and right breast thereby distinguishing differences in breast characteristics, e.g., volume, shape, position, as well as other properties. In some implementations, the digital model is a three-dimensional (3D) digital model including spatial information relating to three spatial dimensions"; Kim et al.: Fig. 2, sections Overview and Image-Based 3D Torso Body Modeling, creating a 3D model of a female torso).
Regarding claim 16, TerKonda as modified by Kim et al. teaches all the limitations of claim 1, and TerKonda further teaches determining garment parameters, wherein determining garment parameters comprises: adjusting a virtual garment to fit the three-dimensional model; and determining at least one of: (a) a morphological parameter of the breast based on the adjusted virtual garment; and (b) a parameter of a real garment based on the adjusted virtual garment (par 0006, par 0023, par 0026, par 0031-0033, designing a custom bra based on the 3D shape from the customer's images and constructing customized bra components).
Regarding claim 17, TerKonda as modified by Kim et al. teaches all the limitations of claim 16, and TerKonda further teaches wherein the virtual garment comprises a support element configured to support at least a portion of the subject's breast and wherein adjusting the virtual garment to fit the three-dimensional model comprises: deforming a breast part of the three-dimensional model of the subject to fit the support element, wherein the deforming is constrained to preserve volume of the breast part of the three-dimensional model (par 0023-0026, "Ready-to-wear bras are manufactured in standardized, symmetric combinations of breast cup, underlying support, and band sizes. Mass production of standardized bras does not account for bra cup, underlying support (e.g., underwire) or band customization to correct breast or chest asymmetries. Ready-to-wear garments designed to fit standardized analog sizes result in suboptimal fitting as breast and chest shape, size, and asymmetries vary along a continuum. Disclosed herein is an application in communication with a 3D printing platform for the manufacture of customized garments (e.g., bras, swimsuits, blouses, gowns, etc.) for the correction of breast and chest asymmetries. ….FIGS. 1A-1E are diagrams depicting five exemplary categories of breast asymmetries for which the disclosed system can advantageously produce customized breast support structures. Asymmetries of the breast and/or chest can result from natural variances, congenital deformities or post-surgical changes").
Regarding claim 21, TerKonda as modified by Kim et al. teaches all the limitations of claim 13, and TerKonda further teaches the method comprising determining at least one of (i) a garment size and (ii) a manufacturing parameter of at least one of a real prosthesis and a real garment based on the determined morphological parameters (par 0006, par 0031-0033, designing a custom bra based on the 3D shape from the customer's images and constructing customized bra components).
Regarding claim 22, TerKonda as modified by Kim et al. teaches all the limitations of claim 16, and TerKonda further teaches determining any or any combination of: bandline length; bandline location; breast-garment association; and garment size (par 0054, "The customized garment is assembled (in box 708) such that the printed components and additional material is constructed together to form a finished product capable of being worn by the user and correcting for breast and chest asymmetries. In some embodiments, the user inputs additional customization elements before the components are printed and/or assembled. Examples of additional customization elements for design and fit include fabric, fabric color, thread, thread color, stitch pattern geometry, embroidery, clasps, hooks, or buttons. The user inputs into the application one or more preferences or customizations for one or more components of a customized garment (e.g., bra, swim suit, shirt, or dress). For example, the user can select an underwire color, material, cut, pattern, design, style, type, size, length, or select from preset options stored in the user device memory").
Regarding claim 24, TerKonda as modified by Kim et al. teaches all the limitations of claim 1, and further teaches wherein the image of the subject comprises the subject in a pre-determined pose, wherein the predetermined pose comprises a neutral spine and abduction of the arms at the shoulder, for example with the upper arms extended laterally in the coronal plane (TerKonda: par 0040-0041, "A schematic diagram of the process of FIG. 3A is shown in FIG. 4. A user 400 is shown a distance 410 from the user device 420. The distance separating the user device 420 and user 400 is sufficient to capture the left and right breasts within the image"; Kim et al.: Figs. 2-3).
Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2023/0371634 to TerKonda in view of Kim et al. (KIM YOUNGJUN ET AL: "3D virtual simulator for breast plastic surgery", COMPUTER ANIMATION AND VIRTUAL WORLDS, vol. 19, no. 3-4, 1 January 2008 (2008-01-01), pages 515-526, XP055970248, GB ISSN: 1546-4261, DOI: 10.1002/cav.237), and further in view of U.S. PGPub 2014/0270540 to Spector et al.
Regarding claim 9, TerKonda as modified by Kim et al. teaches all the limitations of claim 8, but does not explicitly teach wherein at least one of the images comprise a reference object, the method comprising determining size information of the three dimensional model of the subject based on an apparent size of the reference object in the at least one image.
In a related endeavor, Spector et al. teach wherein at least one of the images comprise a reference object, the method comprising determining size information of the three dimensional model of the subject based on an apparent size of the reference object in the at least one image (par 0112-0114, par 0132, estimating size information of the subject based on the reference object).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify TerKonda as modified by Kim et al. to include wherein at least one of the images comprise a reference object, the method comprising determining size information of the three dimensional model of the subject based on an apparent size of the reference object in the at least one image, as taught by Spector et al., in order to accurately determine an actual dimension of a target object using a digital image of that object along with a reference object.
Regarding claim 10, TerKonda as modified by Kim et al. and Spector et al. teaches all the limitations of claim 9, and Spector et al. further teach wherein the reference object is positioned on the same plane as at least a portion of the subject's body in the image (Figs. 11-14, par 0051, positioning the reference object with the subject's body). This would be obvious for the same reason given in the rejection for claim 9.
Claims 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2023/0371634 to TerKonda in view of Kim et al. (KIM YOUNGJUN ET AL: "3D virtual simulator for breast plastic surgery", COMPUTER ANIMATION AND VIRTUAL WORLDS, vol. 19, no. 3-4, 1 January 2008 (2008-01-01), pages 515-526, XP055970248, GB ISSN: 1546-4261, DOI: 10.1002/cav.237), and further in view of U.S. PGPub 2019/0117379 to Quiros et al.
Regarding claim 18, TerKonda as modified by Kim et al. teaches all the limitations of claim 16, but does not explicitly teach wherein the method comprises: identifying a desired morphology of at least one of the subject's breasts; adjusting a virtual prosthesis so that, when the virtual prosthesis is positioned on the three-dimensional model to augment a virtual breast of the model, the augmented virtual breast has the desired morphology, optionally wherein the virtual garment (a) comprises the virtual prosthesis or (b) consists solely of the virtual prosthesis.
In a related endeavor, Quiros et al. teach wherein the method comprises: identifying a desired morphology of at least one of the subject's breasts; adjusting a virtual prosthesis so that, when the virtual prosthesis is positioned on the three-dimensional model to augment a virtual breast of the model, the augmented virtual breast has the desired morphology, optionally wherein the virtual garment (a) comprises the virtual prosthesis or (b) consists solely of the virtual prosthesis (par 0126-0131, "the imaging system 100 may be used to simulate single or dual breast reconstruction surgeries, for example, to help in planning a surgical procedure. FIG. 18 is a flow chart that shows an exemplary method 400 for performing single and double breast topological optimization simulations. Single breast optimization is used, for example, when only one of the subject's breasts will be modified or reconstructed. In this case, the algorithm may optimize the topology of the breast that is being modified so that the reconstructed topology resembles that of the other (unmodified) breast. Double breast optimization may be used in case both breasts are to be reconstructed, modified, or augmented. In this case, the algorithm may attempt topographical symmetry between the left and right breasts ….In response to the user's input, the computer system 90 executes the Volumetric Breast Mirroring algorithm (step 415). This algorithm may perform computations to modify relevant parameters of the target breast (i.e., the left or right breast as chosen by the user in step 415) to the other breast. FIG. 19 is a listing of the parameters that may be used in the volumetric breast mirroring algorithm. Using simulations, the algorithm may modify some or all of these parameters of the target breast to match those of the other (unmodified) breast.
The computer system 90 may then compute a proposed topology for the target breast, create digital 3D models/images of the proposed reconstructed breast (step 420), and present the results to the user (step 425). These results may include digital 3D models, dimensions, and other relevant parameters for the reconstructed breast, and represent the system's proposal for the reconstruction ….. Based on the user's selection, the computer system 90 executes the Dual Topological Optimization subroutine (step 465). Executing this subroutine may include running an algorithm that computes and creates a modified topography for the breasts with the dimensions of the breasts modified in accordance with the desired look (for e.g., by embedding the selected implant, and matching the dimensions of the two breasts). During these computations, the computer system 90 may access a database that includes predetermined optimal volumes, symmetry ratios, and other data associated with the implants. Results of the computations, which may include digital 3D models of the torso (and/or of each breast) and other relevant results (e.g., dimensions, etc.), may then be presented to the user (step 470)”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify TerKonda as modified by Kim et al. to include identifying a desired morphology of at least one of the subject's breasts and adjusting a virtual prosthesis so that, when the virtual prosthesis is positioned on the three-dimensional model to augment a virtual breast of the model, the augmented virtual breast has the desired morphology, optionally wherein the virtual garment (a) comprises the virtual prosthesis or (b) consists solely of the virtual prosthesis, as taught by Quiros et al., in order to reconstruct a 3D virtual image representing the expected outcome of embedding an implant in the anatomical regions and to display the modified three-dimensional image indicating the expected outcome of the implantation, which is useful for planning, simulating, and/or evaluating the outcome of cosmetic surgery, reconstructive surgery, and/or other medical procedures.
Regarding claim 19, TerKonda as modified by Kim et al. and Quiros et al. teaches all the limitations of claim 18, and Quiros et al. further teach wherein the desired morphology comprises a desired shape, and the adjusting the virtual prosthesis comprises deforming the virtual prosthesis while constraining a total volume of the virtual prosthesis (par 0126-0131, "In a simulation to assist in breast reconstruction surgery, image data of the subject's torso (from the scanner 10 or from the database) is used to reconstruct 3D images of the torso, and compute dimensions and breast tissue volume, similar to steps 305-360 of FIG. 15. The computer system 90 then prompts the user to select the type of breast topological optimization desired (i.e., single or double) (step 405). If the user selects "single," the single breast topological optimization algorithm is activated (left leg of FIG. 18). The user is first prompted to identify the breast (i.e., left or right breast) which is to be optimized (step 410). In response to the user's input, the computer system 90 executes the Volumetric Breast Mirroring algorithm (step 415). This algorithm may perform computations to modify relevant parameters of the target breast (i.e., the left or right breast as chosen by the user in step 415) to the other breast. FIG. 19 is a listing of the parameters that may be used in the volumetric breast mirroring algorithm. Using simulations, the algorithm may modify some or all of these parameters of the target breast to match those of the other (unmodified) breast. The computer system 90 may then compute a proposed topology for the target breast, create digital 3D models/images of the proposed reconstructed breast (step 420), and present the results to the user (step 425). These results may include digital 3D models, dimensions, and other relevant parameters for the reconstructed breast, and represent the system's proposal for the reconstruction…..
Based on the user's selection, the computer system 90 executes the Dual Topological Optimization subroutine (step 465). Executing this subroutine may include running an algorithm that computes and creates a modified topography for the breasts with the dimensions of the breasts modified in accordance with the desired look (for e.g., by embedding the selected implant, and matching the dimensions of the two breasts). During these computations, the computer system 90 may access a database that includes predetermined optimal volumes, symmetry ratios, and other data associated with the implants. Results of the computations, which may include digital 3D models of the torso (and/or of each breast) and other relevant results (e.g., dimensions, etc.), may then be presented to the user (step 470)”). This would be obvious for the same reason given in the rejection for claim 18.
Regarding claim 20, TerKonda as modified by Kim et al. and Quiros et al. teaches all the limitations of claim 18, and Quiros et al. further teaches wherein the method comprises: identifying a first morphological parameter of a first one of the subject's breasts, such as volume, wherein the virtual prosthesis corresponds to the second one of the subject's breasts; deforming the virtual prosthesis under a constraint that the first morphological parameter of the virtual prosthesis matches the first morphological parameter of the first one of the subject's breasts; and identifying a second morphological parameter of the adjusted virtual prosthesis (par 0126-0131, “In a simulation to assist in breast reconstruction surgery, image data of the subject's torso (from the scanner 10 or from the database) is used to reconstruct 3D images of the torso, and compute dimensions and breast tissue volume, similar to steps 305-360 of FIG. 15. The computer system 90 then prompts the user to select the type of breast topological optimization desired (i.e., single or double) (step 405). If the user selects “single,” the single breast topological optimization algorithm is activated (left leg of FIG. 18). The user is first prompted to identify the breast (i.e., left or right breast) which is to be optimized (step 410). In response to the user's input, the computer system 90 executes the Volumetric Breast Mirroring algorithm (step 415). This algorithm may perform computations to modify relevant parameters of the target breast (i.e., the left or right breast as chosen by the user in step 415) to the other breast. FIG. 19 is a listing of the parameters that may be used in the volumetric breast mirroring algorithm. Using simulations, the algorithm may modify some or all of these parameters of the target breast to match those of the other (unmodified) breast. 
The computer system 90 may then compute a proposed topology for the target breast, create digital 3D models/images of the proposed reconstructed breast (step 420), and present the results to the user (step 425). These results may include digital 3D models, dimensions, and other relevant parameters for the reconstructed breast, and represent the system's proposal for the reconstruction….. Based on the user's selection, the computer system 90 executes the Dual Topological Optimization subroutine (step 465). Executing this subroutine may include running an algorithm that computes and creates a modified topography for the breasts with the dimensions of the breasts modified in accordance with the desired look (for e.g., by embedding the selected implant, and matching the dimensions of the two breasts). During these computations, the computer system 90 may access a database that includes predetermined optimal volumes, symmetry ratios, and other data associated with the implants. Results of the computations, which may include digital 3D models of the torso (and/or of each breast) and other relevant results (e.g., dimensions, etc.), may then be presented to the user (step 470)”). This would be obvious for the same reason given in the rejection for claim 18.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jin Ge, whose telephone number is (571)272-5556. The examiner can normally be reached from 8:00 to 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan, can be reached at (571)272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JIN GE
Examiner
Art Unit 2619
/JIN GE/Primary Examiner, Art Unit 2619