DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 2, 11-16, and 20 are objected to because of the following informalities:
"determining the one or more dimensions" should be "said determining the one or more dimensions" or "said automatically determining the one or more dimensions" [Claims 2 and 11-13; line 1 of each claim except claim 2];
"generating the model" should be "said generating the model" [Claims 2, 14, all line 1];
"applying the lattice structure" should be "said applying the lattice structure" [Claims 2, 15, 16, all line 1];
"wherein generating" should be "wherein said generating" [Claim 20, line 1].
Appropriate correction is required. Further, in the interest of compact prosecution, each of these limitations has been interpreted consistent with the corresponding recommended correction above.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claim 17 is rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Claim 17 recites the limitation "unit cell of the" in line 3. This portion of the limitation is unclear because the remaining part of this limitation has been omitted. For this reason, this claim fails to particularly point out and distinctly define the metes and bounds of the subject matter to be protected by the patent grant (MPEP 2171).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Aggarwal (US 2017/0190121).
Regarding claim 1, Aggarwal discloses:
A method of generating a custom insole structure, the method comprising: receiving one or more images of a user’s foot ([0035] In step 204, the mobile device provides instructions to the user to operate the camera to capture images of the user (or more precisely, a body part of the user) in a manner which collects the data necessary to provide a customized wearable for that user, [0039]);
analyzing the one or more images of the user’s foot to identify one or more points of interest of the user’s foot ([0110] In step 1308, the system identifies regions of interest in the images. This is explained in FIG. 12 at step 1214 [0111] in step 1310, given information from the IMU the system calculates the parallax angle between where the first image was captured and the second image. In step 1312, the system calculates distance to the region or point of interest based on the parallax angles and distances between the first and second position. In step 1514, the system is able to use geometric math to solve for the distance between a number of distances within each image. These distances are used to provide coordinates to a number of points in images, and then later used to develop 3-D models of objects within the images);
automatically determining one or more dimensions of the user’s foot based on the identified one or more points of interest of the user’s foot ([0035] In step 206, the mobile device transmits the collected image data to the processing server 24. In step 208, the processing server performs computer vision operations on the image data in order to determine the size, shape, and curvature of the user (or body part of the user), where applicable to the chosen product type [0058]-[0059]);
generating a model of the custom insole structure based at least in part on the determined one or more dimensions of the user’s foot ([0064] FIG. 7 is a flowchart illustrating a process performed by the customization engine for customizing tessellation models. In step 702, the user's selected wearable type and subclass are used to narrow the selection of tessellation model kits from a large number of provided options to a select group. For example, if the wearable type is a shoe insole, all other wearable type tessellation model kits are eliminated from the given printing task process. [0065] In step 704, the computer vision data, including the size and curvature specifications, is imported into the customization engine. In some embodiments, the vision data is roughly categorized, thereby eliminating irrelevant tessellation model kits. Remaining is a single, determined model kit, which most closely resembles the size and curvature specifications. In some embodiments, the tessellation model is built from the ground up on the fly based on the observations in image processing, computer vision and machine learning. [0066] In step 706, the size and curvature specifications are applied to the determined model kit. In doing so, predetermined vertices of the determined model kit are altered using graph coordinates obtained from the computer vision operations. [0067] In step 708, other adjustments can be made to prepare the tessellation model for printing. The other adjustments can be either ornamental and/or functional but are generally unconnected to the measurements of the user obtained through computer vision operations. One possible technique for adjusting a tessellation model uses so-called “negative normals.” [0068]-[0071] creating and adjusting tessellation file);
and applying a lattice structure to the model of the custom insole based at least in part on the determined one or more dimensions of the user’s foot ([0036]-[0037] generating tessellation model [0065]-[0067] generating tessellation model [0068]-[0071] creating and adjusting tessellation file including a lattice structure);
and generating an additive manufacturing file based on the applied lattice structure to the model of the custom insole ([0038] In step 212, the processing server forwards the customized tessellation model to the 3D printer [0068] tessellation file that can be used in generating 3D printed wearables).
As per claim 2, claim 1 is incorporated, Aggarwal further discloses:
further comprising receiving user input and wherein one or more of determining the one or more dimensions of the user’s foot, generating the model of the custom insole structure and applying the lattice structure to the model of the custom insole is based at least in part on the received user input ([0039] user selection for wearable type and subclass [0062] user manual edits [0093] input body type [0099] input body part image data [0052] determine the length and width of the foot (at more than one location) based on input images of user’s foot).
As per claim 3, claim 2 is incorporated, Aggarwal further discloses:
wherein the user input comprises data regarding a user characteristic including one or more of a shoe size of the user, a weight of the user, and a height of the user ([0035] performs computer vision operations on the (user input) image data in order to determine the size, shape, and curvature of the user (or body part of the user), where applicable to the chosen product type [0039] user selection for wearable type and subclass [0062] user manual edits [0093] input body type [0099] input body part image data [0052] determine the length and width of the foot (at more than one location) based on input images of user’s foot).
As per claim 4, claim 2 is incorporated, Aggarwal further discloses:
wherein the user input comprises data regarding a use case of the user including one or more of an activity of the user, a shoe type of the user, historical injury information of the user, and a pain point of the user ([0033] In step 202, the mobile device accepts input from a user through the user interface concerning the selection of the type of wearable the user wants to purchase. In some embodiments, the mobile device uses a mobile application, or an application program interface (“API”) that include an appropriate user interface and enable communication between the mobile device and external web servers...In addition to these examples, the product may be drilled down further into subclasses of wearables. Among shoe insoles, for example, there can be dress shoe, athletic shoe, walking shoe, and other suitable insoles known in the art. Each subclass of wearable can have construction variations [0039] user selection for wearable type and subclass).
As per claim 5, claim 1 is incorporated, Aggarwal further discloses:
wherein receiving one or more images of a user’s foot includes providing prompts to instruct a user to capture images of their foot ([0039]-[0040] step 306).
As per claim 6, claim 1 is incorporated, Aggarwal further discloses:
further comprising determining a usability of the received one or more images of a user’s foot based on an analysis of the received one or more images of a user’s foot ([0035], [0042]-[0043]).
As per claim 7, claim 6 is incorporated, Aggarwal further discloses:
further comprising providing instruction to the user regarding capturing images of the user’s foot based on determining that the usability of the received one or more images of a user’s foot is not usable ([0042]-[0043]).
As per claim 8, claim 1 is incorporated, Aggarwal further discloses:
wherein receiving one or more images of a user’s foot comprises receiving a series of captured images and automatically selecting one of the series of captured images based at least in part on analysis thereof ([0042]-[0043]).
As per claim 9, claim 1 is incorporated, Aggarwal further discloses:
wherein analyzing the one or more images of the user’s foot to identify one or more points of interest of the user’s foot is performed automatically using a machine-learning model that is trained to identify the one or more points of interest ([0052] during image processing, computer vision, and machine learning operations on the processing server, the top down images are used to determine the length and width of the foot (at more than one location). Example locations for determining length and width include heel to big toe, heel to little toe, joint of big toe horizontally across, and distance from either side of the first to fifth metatarsal bones. An additional detail collected from the top down view is the skin tone of the user's foot [0110] In step 1308, the system identifies regions of interest in the images. This is explained in FIG. 12 at step 1214).
As per claim 10, claim 9 is incorporated, Aggarwal further discloses:
wherein the one or more points of interest comprise one or more of a first and second extant point of the user’s foot, a position of an arch of the user’s foot, a position of a ball of a user’s foot, and a position of a heel of a user’s foot ([0052] In step 404, the mobile device captures images of the user's foot from the top down...Example locations for determining length and width include heel to big toe, heel to little toe, joint of big toe horizontally across, and distance from either side of the first to fifth metatarsal bones. An additional detail collected from the top down view is the skin tone of the user's foot [0059] In step 504, the server application software analyzes the image data to determine distances between known points or objects on the subject's body part. Example distances include heel to big toe, heel to little toe, joint of big toe horizontally across, and distance from either side of the first to fifth metatarsal bones.).
As per claim 11, claim 1 is incorporated, Aggarwal further discloses:
wherein automatically determining one or more dimensions of the user’s foot based on the identified one or more points of interest of the user’s foot is performed using a machine learning model that is trained to generate at least a measurement of a user’s foot based on the identified one or more points of interest of the user’s foot ([0052] Later, during image processing, computer vision, and machine learning operations on the processing server, the top down images are used to determine the length and width of the foot (at more than one location). Example locations for determining length and width include heel to big toe, heel to little toe, joint of big toe horizontally across, and distance from either side of the first to fifth metatarsal bones. An additional detail collected from the top down view is the skin tone of the user's foot [0110] In step 1308, the system identifies regions of interest in the images. This is explained in FIG. 12 at step 1214 [0111] In step 1312, the system calculates distance to the region or point of interest based on the parallax angles and distances between the first and second position.).
As per claim 12, claim 1 is incorporated, Aggarwal further discloses:
wherein automatically determining one or more dimensions of the user’s foot based on the identified one or more points of interest of the user’s foot is based at least in part on a predetermined relationship between a first characteristic of a user’s foot and a length of a user’s foot ([0035], [0052] the top down images are used to determine the length and width of the foot (at more than one location). Example locations for determining length and width include heel to big toe, heel to little toe, joint of big toe horizontally across, and distance from either side of the first to fifth metatarsal bones, [0058]-[0059], [0111]).
As per claim 13, claim 1 is incorporated, Aggarwal further discloses:
wherein automatically determining one or more dimensions of the user’s foot based on the identified one or more points of interest of the user’s foot is based at least in part on analysis of a received one or more images of the user’s foot ([0035] In step 206, the mobile device transmits the collected image data to the processing server 24. In step 208, the processing server performs computer vision operations on the image data in order to determine the size, shape, and curvature of the user (or body part of the user), where applicable to the chosen product type [0058]-[0059]).
As per claim 14, claim 2 is incorporated, Aggarwal further discloses:
wherein generating the model of the custom insole is further based at least in part on the received user input ([0039] user selection for wearable type and subclass [0062] user manual edits [0093] input body type [0099] input body part image data [0052] determine the length and width of the foot (at more than one location) based on input images of user’s foot).
As per claim 15, claim 2 is incorporated, Aggarwal further discloses:
wherein applying the lattice structure to the model of the custom insole is further based at least in part on the received user input ([0064] FIG. 7 is a flowchart illustrating a process performed by the customization engine for customizing tessellation models. In step 702, the user's selected wearable type and subclass are used to narrow the selection of tessellation model kits from a large number of provided options to a select group).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal (US 2017/0190121) in view of Zou (WO 2021/169804).
As per claim 16, claim 1 is incorporated, Aggarwal fails to disclose “wherein applying the lattice structure to the model of the custom insole includes one or more of defining a functional zone and positioning of the functional zone”.
However, Zou teaches the above limitations ([Pgs. 8-11], [Pgs. 30-35] generating and adjusting of lattice structures).
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the teaching of Zou into the teaching of Aggarwal because the references similarly disclose shoe-related design and/or manufacturing. Consequently, one of ordinary skill in the art would be motivated to further modify the system as in Aggarwal to further include the specifics of the lattice structure as in Zou so that “the overall order of the topological structure is increased” and at least to “increase the elastic deformation ability of the shoe midsole” (Zou, [Pg. 8]).
As per claim 17, claim 16 is incorporated, Zou further discloses:
further comprising at least one of selecting a unit cell of the lattice structure from a predetermined plurality of unit cells based on the functional zone or creating a unit cell of the ([Pg. 7] the basic unit structure composing the midsole of the shoe is a spatial connecting rod of a certain shape, and the positional relationship between the connecting rods can be represented by the positional relationship of the connecting bonds between the atoms of the unit cell unit in the crystal form. Of course, the basic unit form of the lattice structure of the shoe midsole is not limited by the connection form of the actual unit cell, but a structural form with spatially oriented connection bonds between the unit cell atoms. [Pgs. 8-11], [Pgs. 30-35] generating and adjusting of lattice structures).
As per claim 18, claim 16 is incorporated, Zou further discloses:
wherein at least one characteristic of a first unit cell of the functional zone varies from a second unit cell of the lattice structure that is not part of the functional zone ([Pgs. 8-11], [Pg. 20] intervention area [Pgs. 30-35] generating and adjusting of lattice structures).
As per claim 19, claim 18 is incorporated, Zou further discloses:
wherein the at least one characteristic of a first unit cell comprises one or more of a dimension of the unit cell, a dimension of one or more members of the unit cell, and a geometry of the members of the unit cell ([Pgs. 7-11], [Pgs. 30-35] generating and adjusting of lattice structures).
As per claim 20, claim 1 is incorporated, Zou further discloses:
wherein generating the additive manufacturing file based on the applied lattice structure to the model of the custom insole comprises generating additive manufacturing device instructions based on the lattice structure and at least one of applying supports to the lattice structure or orienting the lattice structure for the additive manufacturing process ([Pg. 17] the gait data is related to the physical function of the target user. For example, the elderly usually have a lower walking speed and a smaller stride length, and the time to stand on the bottom surface supported by both feet during walking becomes longer. The strength of the topological structure or the lattice structure is related to the sense of touch of the human body during wearing, and the strength includes stiffness or hardness. In the analysis of gait data, for the target user with a longer biped support period, the topological structure strength or lattice structure strength is set to have higher toughness and lower hardness [Pg. 25] For example, for a shoe midsole designed to adjust the pressure distribution through the target area and the intervention area, in some embodiments, the waist portion of the shoe midsole of the present application has a raised portion with a preset height to support the target user's foot [Pg. 26] the structural model and performance parameters such as intensity corresponding to the topological structure or lattice structure of the shoe midsole are input to the control device of the 3D printing device [Pg. 36] In step S140, three-dimensional slice data of the shoe midsole readable by the 3D printing device is formed).
Pertinent Prior Art
The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Cavanagh (US 2006/0070260) discloses design and manufacture of insoles;
Mukumoto (US 2009/0247909) discloses shoe or insole fitting navigation system;
Shaffeeullah (US 2002/0138923) discloses producing individually contoured shoe insert;
Peterson (US 2006/0283243) discloses manufacturing custom orthotic footbeds;
Schwartz (US 2018/0228401) discloses producing a foot orthotic through 3d printing using foot pressure measurements and material hardness and/or structure to unload foot pressure;
Holan (US 12,408,730) discloses personalized shoe insoles.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM P BARTLETT whose telephone number is (469)295-9085. The examiner can normally be reached on M-Th 11:30-8:30, F 11-3.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi can be reached on 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILLIAM P BARTLETT/
Primary Examiner, Art Unit 2169