DETAILED ACTION
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on Aug. 25th, 2025 has been entered.
This action is in response to the amendments filed on Aug. 25th, 2025. A summary of this action:
Claims 1-20 have been presented for examination.
Claims 2-15, 17-18, and 20 are objected to because of informalities.
Claims 2, 7, 17, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of both a mathematical concept and mental process without significantly more.
The claims are not rejected under § 102/103. The closest reference is Pascual, Alejandro, et al. "A RE methodology to achieve accurate polygon models and NURBS surfaces by applying different data processing techniques." Metals 10.11 (2020): 1508. See the abstract, fig. 1 and fig. 4, and §§ 2.2-2.2.4, and the clarification on this technique in § 3. However, Pascual, when taken in view of the other pertinent prior art of record, does not fairly teach the particular ordered combination of all features of the presently claimed invention.
The next closest reference is Sanchez Bermudez et al., US 2020/0210845, abstract and fig. 12. See ¶ 47, ¶ 68, ¶ 93, ¶ 224. Again, it does not teach the presently claimed order of features, nor would it have been obvious in view of the other art of record, without the use of impermissible hindsight, to cobble together an amalgamation of numerous references. Thus, neither alone nor in combination do the references fairly teach the particular subject matter, in ordered combination, presently claimed.
This action is non-final.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments/Amendments
Regarding the § 112(b) Rejections
In view of the amendments, these rejections are withdrawn.
Regarding the § 101 Rejection
Maintained, updated as necessitated by amendment.
With respect to the remarks, regarding the newly added limitations (§ II.A of the remarks), the Examiner respectfully disagrees. See the detailed updated rejection below for clarification on how the amended claims have been analyzed.
And to clarify, the Examiner did not merely say it is to “apply it” on a computer, as detailed below.
With respect to remarks § II.B, the Examiner respectfully disagrees. See below for how the claimed invention is now analyzed in view of the amendments.
In particular, the Examiner notes that in contrast to McRo, the main result of this is a math relationship/equation in the form of a NURBS spline with the values of its variables (control points) being set/fitted, i.e. this application is not directed to simply a manner of automating a manual process that was previously only able to be performed subjectively (McRo) by doing it in an entirely different way (McRo; contrasted with FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 120 USPQ2d 1293 (Fed. Cir. 2016) which provides a short summary of McRo, and also had “rules” per MPEP § 2106.04(a)(2)(III)(C)).
Furthermore, just merely “using RL” or other forms of generic machine learning (without improving machine learning technology itself; and without claiming the math of machine learning itself) is not sufficient. See Recentive Analytics, per the Aug. 2025 memorandum, footnote 17. See the Examiner’s suggested amendment below to clarify.
With respect to remarks § II.C, see the rejection below, and see the evidence below on the additional elements. These remarks do not address the prior evidence of record, but rather merely point to the applicant's own specification.
To summarize, the claim, at present, merely invokes a conventional machine learning pipeline for object recognition (the identifying step); generically uses a “GAN” to perform a math concept (the orthographic views), wherein the math concept itself of projecting from 3D to 2D using such a projection has been mentally performed for centuries, if not millennia (i.e., also a mental process); performs the math concept with the spline, but in a computer environment with the generic recitation of “CAD”, akin to the XML in MPEP § 2106.05(f) and (h); and then simply outputs the data and uses it in one or more generically recited applications (akin to In re Brown, as discussed below: both token post-solution activity and mere instructions to “apply it” wherein this was conventional).
See the Examiner’s suggested amendment to clarify on what would be unconventional in view of the evidence of record to contrast.
Claim Objections
Claims 2-15, 17-18, and 20 are objected to because of the following informalities:
The claims have numerous issues with antecedent basis, resulting from substantial amendment to the independent claims without corresponding amendments to the dependent claims. The Examiner suggests amending the claims such that the first recitation of each distinct element uses an article such as “a”/“an”; later recitations referring back to the same distinct element use an article such as “the”/“said”; disambiguating modifiers (e.g., first, second, etc.) are used when there are multiple distinct elements with the same base term; and the use of modifiers for each distinct element is kept consistent. Below is a non-exhaustive list of examples of these issues:
Claim 1 now recites a training step of the classification engine, but claim 3 does not refer back to and further limit it expressly.
Claim 4 recites: “manufacturing constraint criteria” but claim 1 already recites this constraint, and see ¶ 110: “pre-defined success criteria (e.g., footwear manufacturing constraints of the apply constraints engine 624).”, i.e. it appears to be the same element in view of the disclosure. The Examiner suggests more clearly further limiting what is found in the independent claim.
Claim 6 – the control points are now recited in the independent claims.
Claim 8 – see claim 1 for its recitation. The Examiner suggests ensuring the preamble is also corrected as to which claim it depends upon, should claim 7 be cancelled as suggested below for the § 112(b) rejection.
Claim 9 – see the independent claims' recitation; note the preamble also needs to be amended once claim 7 is canceled, in view of the § 112(b) rejection below and the recitation in the independent claims of the subject matter of claim 7.
Claim 10 – see the independent claims, as several elements of claim 10 are now found in the independent claims.
Claim 12 – a similar issue as above; see the independent claims.
Appropriate correction is required.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2, 7, 17, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The Examiner notes the substantial amendments to the independent claims have caused indefiniteness issues in the dependent claims, which were not amended.
The following dependent claims are indefinite, because their subject matter is found either expressly or inherently in the independent claim, and the dependent claim neither refers back to it nor further limits it.
Dependent claims 2, 17, and 20 – the plain meaning of spline is a math function based on control points (see the clarifying definitions below), and the independent claims now recite all of the other subject matter in these claims.
Dependent claim 7 – a narrower form of this is recited in the independent claims, and the dependent claim does not even refer back to it.
The Examiner suggests deleting the above noted dependent claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of both a mathematical concept and mental process without significantly more.
Suggested amendments to address the rejection
As an initial matter, as was discussed during the interviews, the Examiner suggests (after further consideration) amending the claims in the following manner to address this rejection:
Incorporate the subject matter in ¶ 101 of: “The instant disclosure solves the reducing search space challenge by first classifying a first level of bounding boxes then drilling down into each bounding box of the first level to refine identifications. For example, it is easier to find the midsole of the footwear than to identify the heel counter, the biteline, the toe counter, etc.” – this would be an unconventional improvement to technology.
To clarify, § 101 is a threshold test, and §§ 102/103 are of no relevance to § 101. MPEP § 2106(I): “As the Supreme Court made clear in Bilski, 561 U.S. at 602, 95 USPQ2d at 1006...The § 101 patent-eligibility inquiry is only a threshold test.”
MPEP § 2106.05(I): “See also Alice Corp., 573 U.S. at 217-18, 110 USPQ2d at 1981 (citing Mayo, 566 U.S. at 78, 101 USPQ2d at 1968 (after determining that a claim is directed to a judicial exception, "we then ask, ‘[w]hat else is there in the claims before us?’") (emphasis added)); RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327, 122 USPQ2d 1377 (Fed. Cir. 2017) ("Adding one abstract idea (math) to another abstract idea (encoding and decoding) does not render the claim non-abstract"). Instead, an "inventive concept" is furnished by an element or combination of elements that is recited in the claim in addition to (beyond) the judicial exception, and is sufficient to ensure that the claim as a whole amounts to significantly more than the judicial exception itself. Alice Corp., 573 U.S. at 217-18, 110 USPQ2d at 1981 (citing Mayo, 566 U.S. at 72-73, 101 USPQ2d at 1966)… As made clear by the courts, the "‘novelty’ of any element or steps in a process, or even of the process itself, is of no relevance in determining whether the subject matter of a claim falls within the § 101 categories of possibly patentable subject matter." Intellectual Ventures I v. Symantec Corp., 838 F.3d 1307, 1315, 120 USPQ2d 1353, 1358 (Fed. Cir. 2016) (quoting Diamond v. Diehr, 450 U.S. at 188–89, 209 USPQ at 9)…. See, e.g., BASCOM Global Internet v. AT&T Mobility LLC, 827 F.3d 1341, 1350, 119 USPQ2d 1236, 1242 (Fed. Cir. 2016) ("The inventive concept inquiry requires more than recognizing that each claim element, by itself, was known in the art. . . . [A]n inventive concept can be found in the non-conventional and non-generic arrangement of known, conventional pieces."). Specifically, lack of novelty under 35 U.S.C. 102 or obviousness under 35 U.S.C. 103 of a claimed invention does not necessarily indicate that additional elements are well-understood, routine, conventional elements.”
Thus, while the below references demonstrate that it may have been obvious to do what is in ¶ 101 of the instant specification, they do not demonstrate that it was routine or conventional; rather, they describe it as the thrust of their own advance in the technology of image recognition (i.e., it was unconventional):
Bellver, Miriam, et al. "Hierarchical object detection with deep reinforcement learning." arXiv preprint arXiv:1611.03718 (2016). § 1 ¶¶ 2-3
König, Jonas, et al. "Multi-stage reinforcement learning for object detection." Science and Information Conference. Cham: Springer International Publishing, 2019. Abstract, § 2 ¶¶ 2-3, and § 3.1-2; also see figs. 1 and 2.
Vaca-Castano, Gonzalo, Niels DaVitoria Lobo, and Mubarak Shah. "Holistic object detection and image understanding." Computer Vision and Image Understanding 181 (2019): 1-13. See the abstract; § 2, last paragraph: “Hence, in this paper, we propose a novel purely visual hierarchical representation that allows modeling the image content, and learning about the image structure from datasets with massive amount of images.”; and § 3 ¶ 2 (to clarify, page 4, col. 1, ¶ 2), and figs. 3 and 4.
To further clarify, see the well-understood, routine, and conventional (WURC) evidence for what was conventional, to contrast with ¶ 101 and the references here.
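For clarity of the record only, the two-level "drill down" described in ¶ 101 may be illustrated abstractly as follows. This is a minimal sketch: the function names, box coordinates, and component lists are hypothetical placeholders, not the applicant's implementation nor any reference's code; it merely shows how a coarse first-level box shrinks the search space for harder-to-find components.

```python
# Illustrative sketch (hypothetical, not the applicant's code): a two-level
# "drill down" search in which easy coarse bounding boxes are found first,
# and finer components are searched only inside each coarse box.

def coarse_detect(image):
    # Hypothetical first-level classifier: returns easy-to-find regions,
    # e.g. the midsole, as (x, y, width, height) boxes.
    return {"midsole": (10, 40, 80, 20)}

def refine_detect(image, box, targets):
    # Hypothetical second-level classifier: searches only within `box`
    # for harder components (heel counter, biteline, toe counter, ...).
    x, y, w, h = box
    # Placeholder result: sub-boxes laid out inside the coarse box.
    return {t: (x + i * 10, y, 10, h) for i, t in enumerate(targets)}

def hierarchical_detect(image):
    results = {}
    for name, box in coarse_detect(image).items():
        results[name] = box
        results.update(refine_detect(image, box, ["heel_counter", "biteline"]))
    return results

boxes = hierarchical_detect(None)
```

The point of the sketch is only the ordering: the second-level search is confined to the first-level box, rather than the whole image.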
In addition, the Examiner also suggests the following:
1) Independent claims recite: “a three-dimensional (3D) computer-aided design (CAD) digital asset comprising a non-uniform rational basis spline (NURBS) model of the footwear, wherein the CAD digital asset comprises a spline that models the one or more visual components”
The Examiner suggests limiting the spline to be the NURBS element. As of right now, the claim recites these as distinct elements, and thus is not expressly limited to fitting a NURBS, but rather any spline.
2) Should ¶ 101 be incorporated, the Examiner notes that dependent claim 13 appears to recite a much broader form of this. As such, the Examiner suggests cancelling this.
3) The Examiner suggests, given the above suggestions of cancelling some of the dependent claims, adding new claims with the particular subject matter of ¶ 102. See Research Corp. in MPEP § 2106.04(a)(2)(III)(A) to clarify on the Examiner's suggestion.
4) If further subject matter is to be considered for addition in dependent form, see fig. 6 #s 644 and 648-650, and the accompanying description; similarly see #s 632-636, as this subject matter does not appear to be present in the claims.
Step 1
Claim 1 is directed towards the statutory category of a process.
Claim 16 is directed towards the statutory category of an article of manufacture.
Claim 19 is directed towards the statutory category of an apparatus.
Claims 16 and 19, and the dependents thereof, are rejected under a similar rationale as representative claim 1, and the dependents thereof.
Claim interpretation
The claims are given their broadest reasonable interpretation by a person of ordinary skill in the art. See Phillips v. AWH Corp., 415 F.3d 1303, 1316, 75 USPQ2d 1321, 1329 (Fed. Cir. 2005) as discussed in MPEP § 2111; also see in MPEP § 2111: “Because applicant has the opportunity to amend the claims during prosecution, giving a claim its broadest reasonable interpretation will reduce the possibility that the claim, once issued, will be interpreted more broadly than is justified. In re Yamamoto, 740 F.2d 1569, 1571 (Fed. Cir. 1984); In re Zletz, 893 F.2d 319, 321, 13 USPQ2d 1320, 1322 (Fed. Cir. 1989) ("During patent examination the pending claims must be interpreted as broadly as their terms reasonably allow.”) … Further, the broadest reasonable interpretation of the claims must be consistent with the interpretation that those skilled in the art would reach.”
MPEP § 2111.01(I): “Under a broadest reasonable interpretation (BRI), words of the claim must be given their plain meaning, unless such meaning is inconsistent with the specification. The plain meaning of a term means the ordinary and customary meaning given to the term by those of ordinary skill in the art at the relevant time. The ordinary and customary meaning of a term may be evidenced by a variety of sources, including the words of the claims themselves, the specification, drawings, and prior art. However, the best source for determining the meaning of a claim term is the specification - the greatest clarity is obtained when the specification serves as a glossary for the claim terms. Phillips v. AWH Corp., 415 F.3d 1303, 1315, 75 USPQ2d 1321, 1327 (Fed. Cir. 2005) (en banc) ("[T]he specification ‘is always highly relevant to the claim construction analysis. Usually, it is dispositive; it is the single best guide to the meaning of a disputed term.’" (quoting Vitronics Corp. v. Conceptronic Inc., 90 F.3d 1576, 1582 (Fed. Cir. 1996)).”
MPEP § 2111.01(III): “"[T]he ordinary and customary meaning of a claim term is the meaning that the term would have to a person of ordinary skill in the art in question at the time of the invention, i.e., as of the effective filing date of the patent application." Phillips v. AWH Corp.,415 F.3d 1303, 1313, 75 USPQ2d 1321, 1326 (Fed. Cir. 2005) (en banc); Sunrace Roots Enter. Co. v. SRAM Corp., 336 F.3d 1298, 1302, 67 USPQ2d 1438, 1441 (Fed. Cir. 2003); Brookhill-Wilk 1, LLC v. Intuitive Surgical, Inc., 334 F.3d 1294, 1298, 67 USPQ2d 1132, 1136 (Fed. Cir. 2003) ("In the absence of an express intent to impart a novel meaning to the claim terms, the words are presumed to take on the ordinary and customary meanings attributed to them by those of ordinary skill in the art.")… Phillips v. AWH Corp., 415 F.3d 1303, 1317, 75 USPQ2d 1321, 1329 (Fed. Cir. 2005) ("Although we have emphasized the importance of intrinsic evidence in claim construction, we have also authorized district courts to rely on extrinsic evidence, which "consists of all evidence external to the patent and prosecution history, including expert and inventor testimony, dictionaries, and learned treatises.")… Any meaning of a claim term taken from the prior art must be consistent with the use of the claim term in the specification and drawings. Moreover, when the specification is clear about the scope and content of a claim term, there is no need to turn to extrinsic evidence for claim interpretation.”
As part of properly determining the broadest reasonable interpretation, one must first determine who is a person of ordinary skill in the art. MPEP § 2141.03(I): “The person of ordinary skill in the art is a hypothetical person who is presumed to have known the relevant art at the relevant time. Factors that may be considered in determining the level of ordinary skill in the art may include: (A) "type of problems encountered in the art;" (B) "prior art solutions to those problems;" (C) "rapidity with which innovations are made;" (D) "sophistication of the technology; and" (E) "educational level of active workers in the field. In a given case, every factor may not be present, and one or more factors may predominate." In re GPAC, 57 F.3d 1573, 1579, 35 USPQ2d 1116, 1121 (Fed. Cir. 1995); Custom Accessories, Inc. v. Jeffrey-Allan Indus., Inc., 807 F.2d 955, 962, 1 USPQ2d 1196, 1201 (Fed. Cir. 1986); Environmental Designs, Ltd. V. Union Oil Co., 713 F.2d 693, 696, 218 USPQ 865, 868 (Fed. Cir. 1983)…. "A person of ordinary skill in the art is also a person of ordinary creativity, not an automaton." KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 421, 82 USPQ2d 1385, 1397 (2007)… The level of disclosure in the specification of the application under examination or in relevant references may also be informative of the knowledge and skills of a person of ordinary skill in the art. For example, if the specification is entirely silent on how a certain step or function is achieved, that silence may suggest that figuring out how to achieve that step or function is within the ordinary skill in the art, provided that the specification complies with 35 U.S.C. 112. References which are not prior art may be relied upon to demonstrate the level of ordinary skill in the art at or around the relevant time. See In re Merck & Co., Inc., 800 F.2d 1091, 1098, 231 USPQ 375, 380 (Fed. Cir. 
1986) ("Evidence of contemporaneous invention is probative of ‘the level of knowledge in the art at the time the invention was made.’"…”
MPEP § 2143.01(II): “If the only facts of record pertaining to the level of skill in the art are found within the prior art of record, the court has held that an invention may be held to have been obvious without a specific finding of a particular level of skill where the prior art itself reflects an appropriate level. Chore-Time Equipment, Inc. v. Cumberland Corp., 713 F.2d 774, 218 USPQ 673 (Fed. Cir. 1983). See also Okajima v. Bourdeau, 261 F.3d 1350, 1355, 59 USPQ2d 1795, 1797 (Fed. Cir. 2001).”
Should further clarification be sought on this level of skill determination during Examination, see (informative) Ex parte Jud, PTAB Appeal No. 2006-1061, available here: https://www.uspto.gov/patents/ptab/precedential-informative-decisions
Instant independent claims recite the term “orthographic”. See ¶ 86; in particular, note that the transformation is by an “orthographic projection”.
Plain meaning over time for this term:
Webster, Noah. “An American Dictionary of the English language”. Volume II. 1828. Accessed via the Internet Archive, contributed by the University of California Libraries. URL: archive(dot)org/details/americandictiona02websrich/page/218/mode/2up. Page 218
[Image: media_image1.png (greyscale PNG, 312 × 389)]
[Image: media_image2.png (greyscale PNG, 385 × 421)]
And the term still has the same meaning today:
Merriam-Webster Dictionary. “Orthographic projection”, merriam-webster(dot)com/dictionary/orthographic%20projection. Accessed 13 Feb. 2026. “projection of a single view of an object (such as a view of the front) onto a drawing surface in which the lines of projection are perpendicular to the drawing surface”. See section “Time Traveler”: “The first known use of orthographic projection was in 1668”
[Image: media_image3.png (greyscale PNG, 200 × 400)]
To clarify, see the definition of “projection”:
Merriam-Webster Dictionary. “Projection”. merriam-webster.com/dictionary/projection. Accessed 13 Feb. 2026. “1. a: a systematic presentation of intersecting coordinate lines on a flat surface upon which features from a curved surface (as of the earth or the celestial sphere) may be mapped b: the process or technique of reproducing a spatial object upon a plane or curved surface or a line by projecting its points also: a graph or figure so formed”
To further clarify, for the equations of the math of this, see:
Weisstein, Eric W. "Orthographic Projection." From MathWorld--A Wolfram Resource. mathworld(dot)wolfram(dot)com/OrthographicProjection(dot)html. Accessed Feb. 13th, 2026.
[Image: media_image4.png (greyscale PNG, 711 × 936)]
Should additional clarification be sought, on the plain meaning consistent with the instant disclosure, see Snyder, J. P. (1987). Map Projections—A Working Manual (U.S. Geological Survey Professional Paper 1395). Washington, D.C.: US Government Printing Office. pp. 145–153. URL: pubs(dot)usgs(dot)gov/pp/1395/report(dot)pdf. Do note its history: “The Egyptians were probably aware of the Orthographic projection, and Hipparchus of Greece (2nd century B.C.) used the equatorial aspect for astronomical calculations. Its early name was "analemma," a name also used by Ptolemy, but it was replaced by "orthographic" in 1613 by François d'Aiguillon of Antwerp. While it was also used by Indians and Arabs for astronomical purposes, it is not known to have been used for world maps older than 16th-century works by Albrecht Dürer (1471-1528), the German artist and cartographer, who prepared polar and equatorial versions (Keuning, 1955, p. 6).” And the section “GEOMETRIC CONSTRUCTION”, followed by section “FORMULAS FOR THE SPHERE”
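To further illustrate the plain meaning set forth above, the projection may be sketched computationally. This is illustrative only: the axis-selection convention (which coordinate is perpendicular to the drawing surface for a "front", "side", or "top" view) is an assumption for the sketch, not taken from the instant disclosure.

```python
# Illustrative sketch of the plain-meaning orthographic projection: points
# are projected onto a drawing plane along lines perpendicular to that
# plane. Projecting onto a coordinate plane simply drops the coordinate
# along the projection axis (the assumed convention below is hypothetical).

def orthographic_view(points_3d, view="front"):
    # `view` selects which two coordinates are kept on the drawing surface;
    # the discarded coordinate is the one along the lines of projection.
    keep = {"front": (0, 1), "side": (1, 2), "top": (0, 2)}[view]
    return [(p[keep[0]], p[keep[1]]) for p in points_3d]

# The eight corners of a unit cube collapse onto four points in any view,
# since pairs of corners lie on the same perpendicular projection line.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
front = orthographic_view(cube, "front")
```

The collapse of corner pairs onto shared image points is exactly the "single view of an object... onto a drawing surface" of the dictionary definition quoted above.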
Step 2A – Prong 1
The claims recite an abstract idea of both a mental process and mathematical concept.
See MPEP § 2106.04: “...In other claims, multiple abstract ideas, which may fall in the same or different groupings, or multiple laws of nature may be recited. In these cases, examiners should not parse the claim. For example, in a claim that includes a series of steps that recite mental steps as well as a mathematical calculation, an examiner should identify the claim as reciting both a mental process and a mathematical concept for Step 2A Prong One to make the analysis clear on the record.”
To clarify, see the USPTO 101 training examples, available at https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility.
The mathematical concept recited in claim 1 is:
preprocessing the input image by transforming the input image into one or more two-dimensional (2D) orthographic views, the preprocessing comprising generating an orthographic view not included in the input image using a generative adversarial network (GAN) engine trained on one or more footwear component structures to generate the orthographic view; - a math concept of math calculations in textual form, but for the mere instructions to automate them in a computer environment by using a “GAN” (¶ 86 of the instant disclosure; in particular note “or any other engine…or, alternatively, manually…”). See above for the plain meaning of the term “orthographic” and the associated phrase “orthographic projection” as used to get the resulting claimed “orthographic view”.
based at least in part on the identifying of the one or more visual components of the footwear, automatically generating, via a reinforcement learning engine, a three-dimensional (3D) computer-aided design (CAD) digital asset comprising a non-uniform rational basis spline (NURBS) model of the footwear, wherein the CAD digital asset comprises a spline that models the one or more visual components, wherein generating the 3D CAD digital asset comprises; iteratively transforming, via a reward function of the trained reinforcement learning engine, one or more control points of the spline until the spline is within a predefined average distance from the one or more visual components and the one or more visual components incorporate at least one of a design or manufacturing constraint; - mathematical relationships/equations in textual form in the mathematical field of geometry, but for the mere instructions to do them with a computer/in a computer environment
To clarify on the BRI, ¶ 21: “As used herein, a non-uniform rational basis spline (NURBS) models may include mathematical representations of 2D or 3D objects, which can be standard shapes (such as a cone) or free-form shapes (such as a car).”; ¶¶ 22-23: “…The B-spline is based (the B stands for basis) on four (or another number of) local functions or control points that lie outside the curve itself… A spline may be defined using control points [variables] and a mathematical function based on the control points. A graphical representation of the spline may be mathematically computed based on the control points. The control points determine the shape of a curve corresponding to the mathematical function…” – also, ¶ 78: “In some cases, NURBS may include much more information than mesh since NURBS is essentially a mathematical model of the geometry…”
¶ 120: “An example function of the RSS is used as the reward function in the reinforcement learning algorithm.” – equation 1.
In other words, this is merely claiming the math itself of the reinforcement learning operation “to mean that the root sum square (RSS) of the difference between the points of the target curve and the spline approaches zero, per Equation 1” (¶ 120), i.e., to adjust the values of the variables of the control points in the math equations/relationships of a spline, one simply uses a pre-trained off-the-shelf reinforcement learning engine to do the math calculations of fitting the curve of the spline.
I.e., suppose the spline is a simple spline for a straight line, of the form y = mx + b, but now defined by control points, i.e., line = “math function” of (control point 1, control point 2). The claimed step is simply adjusting the control point values (e.g., x-y coordinates, as this is in 2D orthographic views) by calculating Eq. 1 until the error gets low enough. Note that Eq. 1 also does not mention “y”, but rather only “x”, which presumably refers to the x-axis (it is undefined in the specification); the control point adjustments merely check whether the point is close enough on the x-axis (not the y-axis, interestingly), i.e., x - xc, where presumably x = the true point of the line to be fit, xc = the control point of the spline, and “i” simply indicates which control point.
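For the record, the RSS calculation construed above can be illustrated as follows. This is a toy sketch: the target values, step size, and the stand-in "fit" loop are hypothetical, chosen only to show that Eq. 1 (as paraphrased from ¶ 120) is ordinary math, not the applicant's engine.

```python
import math

# Sketch of the RSS error as construed above (Eq. 1, paraphrased from the
# record): x = a point of the target curve, xc = the corresponding value
# produced by the spline's control points, summed over index i.

def rss(target, fitted):
    return math.sqrt(sum((x - xc) ** 2 for x, xc in zip(target, fitted)))

# A toy "fit": nudge each value a fraction of the way toward its target
# until the RSS is small, standing in for adjusting control-point values.
target = [1.0, 2.0, 3.0]
fitted = [0.0, 0.0, 0.0]
while rss(target, fitted) > 0.01:
    fitted = [xc + 0.1 * (x - xc) for x, xc in zip(target, fitted)]
```

Any iterative scheme that drives the RSS toward zero, whether a reinforcement learning reward or a plain update rule as here, performs the same mathematical calculation.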
To further clarify, the Examiner notes that the RSS equation for fitting a spline is not even tied to reinforcement learning, but rather readily found in introductory statistics courses on splines. E.g.
Mackey, Lester, “Lecture 17: Smoothing splines, Local regression, and GAMs”, Stanford University, STATS 202: Data mining and analysis, Oct. 30th, 2015. URL: web(dot)stanford(dot)edu/~lmackey/stats202/content/lec17-condensed(dot)pdf – slide 6:
[Image: media_image5.png (greyscale PNG, 610 × 492)]
To be clear on what a spline's math functions look like, see slide 2:
[Image: media_image6.png (greyscale PNG, 682 × 992)]
Should further clarification be sought on the plain meaning of B-splines and NURBS, in view of MPEP § 2111.01(I and III), see:
Breen, “B-Splines and NURBS”, Week 5, Lecture 9, CS 430, Drexel University, accessed via the WayBack Machine, Archive date: Aug. 30th, 2017, URL: cs(dot)drexel(dot)edu/~david/Classes/CS430/Lectures/L-09_BSplines_NURBS(dot)6(dot)pdf:
Slide # 3 notes its origin is “the thin wood or metal strips used in building/ship construction” with the “Goal: define a curve as a set of piecewise simple polynomial functions connected together” and “Popularized in late 1960s”, and slide 35 clarifies (“def’d” appears to be shorthand for “defined”):
[Image: media_image7.png (greyscale PNG, 339 × 486)]
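To further illustrate that a spline, including a NURBS, is a math function of its control points, consider the following sketch of the Cox-de Boor recursion for B-spline basis functions, with per-control-point weights giving the rational (NURBS) form. This is illustrative only: the control points, weights, and knot vector are hypothetical examples, not taken from the instant disclosure or the cited slides.

```python
# Illustrative sketch: a B-spline basis via the Cox-de Boor recursion, and
# a NURBS curve point as the weighted (rational) combination of control
# points. Hypothetical example values; not the applicant's implementation.

def basis(i, p, t, knots):
    # i-th degree-p B-spline basis function evaluated at parameter t
    # (valid for t strictly inside the knot range, as used below).
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, t, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) * basis(i + 1, p - 1, t, knots)
    return left + right

def nurbs_point(t, controls, weights, knots, p=2):
    # Rational combination: weighted basis terms normalized to sum to 1.
    terms = [basis(i, p, t, knots) * w for i, w in enumerate(weights)]
    denom = sum(terms)
    x = sum(c[0] * b for c, b in zip(controls, terms)) / denom
    y = sum(c[1] * b for c, b in zip(controls, terms)) / denom
    return (x, y)

controls = [(0, 0), (1, 2), (2, 0)]
weights = [1.0, 1.0, 1.0]      # equal weights reduce the NURBS to a plain B-spline
knots = [0, 0, 0, 1, 1, 1]     # clamped quadratic: the curve hits its end control points
pt = nurbs_point(0.5, controls, weights, knots)
```

The curve is fully determined by the control point values: changing a control point changes the curve through the fixed math function, which is the plain meaning relied upon above.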
As a point of clarity, the Examiner notes the claims do not even require the spline to be a “non-uniform rational basis spline”, but rather recite any “spline”, as these are distinct elements set forth expressly in the claim.
Under the broadest reasonable interpretation, the claim recites a mathematical concept – the above limitations are steps in a mathematical concept such as mathematical relationships, mathematical formulas or equations, and mathematical calculations. If a claim, under its broadest reasonable interpretation, is directed towards a mathematical concept, then it falls within the Mathematical Concepts grouping of abstract ideas. In addition, as per MPEP § 2106.04(a)(2): “It is important to note that a mathematical concept need not be expressed in mathematical symbols, because "[w]ords used in a claim operating on data to solve a problem can serve the same purpose as a formula." In re Grams, 888 F.2d 835, 837 and n.1, 12 USPQ2d 1824, 1826 and n.1 (Fed. Cir. 1989). See, e.g., SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161, 1163, 127 USPQ2d 1597, 1599 (Fed. Cir. 2018)”
See MPEP § 2106.04(a)(2).
To clarify, see the USPTO 101 training examples, available at https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility.
The mental process recited in claim 1 is:
preprocessing the input image by transforming the input image into one or more two-dimensional (2D) orthographic views, the preprocessing comprising generating an orthographic view not included in the input image using a generative adversarial network (GAN) engine trained on one or more footwear component structures to generate the orthographic view; - a mental process, but with mere instructions to automate it in a computer environment (the “GAN”). To clarify, see the plain meaning section above, i.e. this is a math concept that long predates computers, and may have even been done in Ancient Egypt.
in response to the preprocessing of the input image, identifying, via a trained classification engine, one or more visual components of the footwear indicated in the one or more 2D orthographic views by generating one or more bounding boxes around data representing the one or more visual components, the generating of the one or more bounding boxes being based on the classification engine training on a plurality of images that are each labeled with at least one class, each class corresponding to a type of object in a respective image;- a mental observation, but for the mere instructions to do it on a computer/automating the mental process with a computer (the use of the trained classification engine and the generating of the bounding boxes, along with the training, as discussed below this is an insignificant computer implementation; see Aug. 2025 memorandum footnote 17 to Recentive Analytics, and MPEP § 2106.05(g)), e.g. a person, such as a shoe designer or a fashionista, observing a shoe and identifying a visual component of it, e.g. the sole of the shoe
based at least in part on the identifying of the one or more visual components of the footwear, automatically generating, via a reinforcement learning engine, a three-dimensional (3D) computer-aided design (CAD) digital asset comprising a non-uniform rational basis spline (NURBS) model of the footwear, wherein the CAD digital asset comprises a spline that models the one or more visual components, wherein generating the 3D CAD digital asset comprises: iteratively transforming, via a reward function of the trained reinforcement learning engine, one or more control points of the spline until the spline is within a predefined average distance from the one or more visual components and the one or more visual components incorporate at least one of a design or manufacturing constraint; - a mental process with the use of physical aids, but for the mere instructions to do it on a computer.
To be precise, the term “spline” is originally the name of a physical drawing tool, akin to stencils, rulers, protractors, and the like, predating the invention of computer-aided design techniques. In particular, a spline is a thin strip of wood, e.g. balsa, which is used to trace curves on a drawing by bending the spline with the use of hooked weights, also called “ducks”, which provide the control points for the bending (evidence below, including images of this tool, and a brief history of how splines in computer-aided design were based on this tool).
The claim is now merely generically invoking the computer data structure of CAD (akin to doing a mental process in the “context of XML” – MPEP § 2106.05(f)), and specifying that it will be for a “NURBS” spline.
Using such a physical aid, a person would readily be able to iteratively fit a spline to a 2D image/sketch, by iteratively moving the “ducks” of the spline until the spline was fit to a curved portion of the 2D image/sketch (on paper), thus mentally performing, as a mental evaluation, the claimed feature but for the mere recitations to do it on a computer.
See instant ¶ 22: “A spline may include a curvy pattern used to guide someone shaping something large, such as a boat hull.”
See Alatown, “On the Spline: A Brief History of the Computational Curve (Full)”, Article on the Website Alatown, accessed via Wayback Machine with archive date Mar. 16th, 2019. URL: alatown(dot)com/spline-history-architecture/. See “Figure 3: Hooked weights, called “ducks,” accurately secure a spline – here, no more than a thin strip of balsa – for tracing the hull of a sailing vessel.” And “The practice was refined throughout the northern Mediterranean well into the 1700s.iv As shipbuilding evolved from a craft to a science, drawings replaced full-scale wooden templates. The practice of plotting the patterns for ribs and keels became known as lofting since the attic above the workshop was the only dry unobstructed floorspace large enough to accommodate the 1:1 setting out process. The long curves were scribed onto the timber from a thin flexible strip of timber or steel, called a spline (see Figure 3). The spline is bent and held in place on a flat surface by a series of three or more hooked metal weights, called ducks. An optimally smooth, attractive, and mechanically sound curvature is guaranteed by the uniform distribution of stress throughout the long elastic spline as it tries to regain its original straightness. This quality of even smoothness is known to shipbuilders as “fairness” and is championed because it minimizes the vessel’s drag in the water. By the 1600s, European shipbuilders had begun to rely on smaller scaled drawings for design and contractual documentation. Hand-held mechanical drafting splines were invented to trace the overlaid orthogonal projections and cross sections which set out the ship’s critical underlying geometry.”
See Gallo, Giuseppe, and Fulvio Wirz. "The evolution of the digital curve: from shipbuilding spline to the diffusion of NURBS, subdivision surfaces and t-splines as tools for architectural design." UTOPIA E DISTOPIA NEL PROGETTO DIGITALE (2020): 127. See pages 127-129.
See Grandine, Thomas A. "The extensive use of splines at Boeing." SIAM News 38.4 (2005): 3-6. See page 1 including the photograph.
See Nowacki, Horst. "Developments in fluid mechanics theory and ship design before Trafalgar." (2006). See page 8 and page 33 ¶ 3.
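Purely as a hypothetical illustration (the function names, step size, and greedy search strategy below are the Examiner's own and are not taken from the claims or any reference of record), the iterative fitting discussed above, i.e. moving a control point until the curve is within a predefined average distance of target points, can be sketched in a few lines of Python:

```python
# Hypothetical sketch: iteratively move the middle control point of a
# simple quadratic Bezier curve until its average distance to target
# points falls below a threshold (all values illustrative only).

def bezier(p0, p1, p2, t):
    # Point on a quadratic Bezier curve at parameter t in [0, 1].
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

def avg_distance(p0, p1, p2, targets):
    # Mean distance from sampled curve points to the target points.
    total = 0.0
    for i, tgt in enumerate(targets):
        t = i / (len(targets) - 1)
        cx, cy = bezier(p0, p1, p2, t)
        total += ((cx - tgt[0]) ** 2 + (cy - tgt[1]) ** 2) ** 0.5
    return total / len(targets)

def fit_control_point(p0, p2, targets, tol=0.05, step=0.1, max_iter=1000):
    # Greedy search: nudge the middle control point in whichever axis
    # direction reduces the average distance, akin to moving a "duck".
    p1 = [0.0, 0.0]
    for _ in range(max_iter):
        best = avg_distance(p0, tuple(p1), p2, targets)
        if best < tol:
            break
        moved = False
        for dx, dy in [(step, 0), (-step, 0), (0, step), (0, -step)]:
            cand = (p1[0] + dx, p1[1] + dy)
            d = avg_distance(p0, cand, p2, targets)
            if d < best:
                best, p1, moved = d, list(cand), True
        if not moved:
            break
    return tuple(p1), best
```

A person with a physical spline and hooked weights performs the same loop: move a duck, check the fit against the drawing, and repeat until the curve is close enough.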
Now, to clarify, see the above discussion of eq. 1 in ¶ 120. And see the note above that the claims do not even refer to the spline being a “non-uniform rational basis spline”, but rather any “spline” – as these are distinct elements set forth expressly in the claim.
There is a simpler, older math function that would readily also work: the equation for least squares (from the early 1800s, generally credited to Carl Gauss, who also contributed in the 1800s to the discovery of the math of regression).
¶ 119 describes a “straight-line spline”, akin to simply fitting a linear regression using ordinary least squares as was routinely done in the 1800s (example 45; claim 1; see discussion of the history of the Arrhenius equation).
In other words, mathematical techniques that long predate computers (and thus, also mental processes) are readily able to fit the math equation by fitting the coefficients of a linear regression to a “straight line” (¶ 119; also a straight line generally follows the formula y=mx+b).
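As a hypothetical illustration only (the sample points below are the Examiner's own), the ordinary least squares fit of y = mx + b uses only the closed-form arithmetic available in the 1800s:

```python
# Hypothetical sketch: ordinary least squares fit of y = m*x + b using
# the closed-form normal-equation solution (sample data illustrative).

def ols_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope m = covariance(x, y) / variance(x); intercept from the means.
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    m = num / den
    b = mean_y - m * mean_x
    return m, b

# Points lying exactly on y = 2x + 1 recover m = 2 and b = 1.
m, b = ols_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Every operation above (sums, means, one division) is pencil-and-paper arithmetic; the computer merely automates it.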
This claim simply invokes more modern mathematics, in a generic but particular manner (by claiming the math itself), for the mental process of fitting a line to a set of data points (in this case, in a 2D image, e.g. a drawing of a shoe, see instant figures), something done long before computers, and merely generically invokes machine learning as a tool to automate this abstract idea. See the August 2025 memorandum, footnote 17, citing Recentive Analytics. And the claim does this but for generally linking to a computer environment (in the context of “XML”, or in this case “CAD”; MPEP § 2106.05(f); MPEP § 2106.05(h) has a similar finding regarding “XML tags”).
To clarify, while CAD B-rep models with NURBS do have their own data structures for how the data is to be stored in the computer, this claim is not to a particular data structure that is an improvement to technology, but rather merely to the data itself, wherein the data is, per the specification, merely mathematical representations of geometry in the form of the math equations of B-splines. MPEP § 2106.05(a)(I): “vii. Providing historical usage information to users while they are inputting data, in order to improve the quality and organization of information added to a database, because "an improvement to the information stored by a database is not equivalent to an improvement in the database’s functionality," BSG Tech LLC v. Buyseasons, Inc., 899 F.3d 1281, 1287-88, 127 USPQ2d 1688, 1693-94 (Fed. Cir. 2018); and” – to clarify, see Recentive Analytics, footnote 4, and the Stanford case it points to. See the oral arguments of Recentive Analytics as well for the pertinence of the Stanford case. For additional clarification, given that splines are found in course notes in the mathematical field of statistics, see SAP v. InvestPic as cited in MPEP § 2106.04(a)(2)(I).
And as stated above, the claim does not even require doing the fitting with the NURBS.
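For completeness, and purely as a hypothetical illustration (the knot vector, weights, and control points below are the Examiner's own), evaluating a point on a NURBS curve is itself a direct application of the Cox-de Boor recursion, i.e. a math equation:

```python
# Hypothetical sketch: evaluating one point on a NURBS curve via the
# Cox-de Boor recursion (knots, weights, control points illustrative).

def basis(i, p, u, knots):
    # Cox-de Boor recursive definition of the B-spline basis N_{i,p}(u).
    # (Half-open interval convention: u must be below the final knot.)
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, degree, ctrl, weights, knots):
    # Rational combination: sum(w_i * N_i * P_i) / sum(w_i * N_i).
    num_x = num_y = den = 0.0
    for i, (px, py) in enumerate(ctrl):
        n = basis(i, degree, u, knots) * weights[i]
        num_x += n * px
        num_y += n * py
        den += n
    return (num_x / den, num_y / den)
```

With all weights equal to 1, the rational form reduces to an ordinary B-spline, underscoring that the “NURBS” recitation is the math equation itself rather than any particular machine.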
Under the broadest reasonable interpretation, these limitations are process steps that cover mental processes including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of physical aids but for the recitation of a generic computer component. If a claim, under its broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components, then it falls within the "Mental Process" grouping of abstract ideas. A person would readily be able to perform this process either mentally or with the assistance of physical aids. See MPEP § 2106.04(a)(2).
To clarify, see the USPTO 101 training examples, available at https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility. In particular, with respect to the physical aids, see example # 45, analysis of claim 1 under step 2A prong 1, including: “Note that even if most humans would use a physical aid (e.g., pen and paper, a slide rule, or a calculator) to help them complete the recited calculation, the use of such physical aid does not negate the mental nature of this limitation.”; also see example # 49, analysis of claim 1, under step 2A prong 1: “Moreover, the recited mathematical calculation is simple enough that it can be practically performed in the human mind. Even if most humans would use a physical aid, like a pen and paper or a calculator, to make such calculations, the use of a physical aid would not negate the mental nature of this limitation.”
As such, the claims recite an abstract idea of both a mental process and mathematical concept.
Step 2A, prong 2
The claimed invention does not recite any additional elements that integrate the judicial exception into a practical application. Refer to MPEP §2106.04(d).
The following limitations are merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f), including the “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more”:
Claim 1 - at one or more computing machines; Claim 16 – A non-transitory machine-readable medium storing instructions which, when executed by one or more computing machines, cause the one or more computing machines to perform operations comprising: claim 19 – A system comprising: processing circuitry; and a memory storing instructions which, when executed by the processing circuitry, cause the processing circuitry to perform operations comprising:
The following limitations are generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h):
…footwear… - generally linking to a field of use, when read in view of ¶ 74: “However, the technology disclosed herein is not limited to footwear and may be expanded to any other apparel (e.g., jeans, leather jackets, skirts, etc.) or non-apparel items (e.g., tables, chairs, houses, bicycles, baseball mitts, hockey sticks, baseballs, basketballs, etc.)”, also see ¶ 85, and ¶ 100: “Furthermore, the technology described herein is not limited to footwear and may be expanded to any other apparel (e.g., jeans) or apparel components (e.g., pocket(s), belt loop(s), zipper, and button of jeans). The technology may be expanded even further to non-apparel items (e.g., dining table) or components of non-apparel items (e.g., table leg(s) and tabletop of dining table)” and ¶ 135: “However, the technology disclosed herein is not limited to footwear and may be used for other items (e.g., apparel, equipment, object(s), etc.) that can be modeled in 2D or 3D. Examples of other items for which embodiments of the technology may be used, in addition to or in place of footwear include: athletic equipment, vehicle tires, children's toys, mobile phone cases, desktop ornaments, holiday decorations, candle holders, and the like.”
As discussed above, the recitations of CAD are merely generally linking it to a particular technological environment (akin to the XML tags of MPEP § 2106.05(h)) and akin to the “context of XML” in MPEP § 2106.05(f).
And, as discussed above, the data itself in CAD (NURBS), in view of the specification, is simply math relationships/equations in geometry, i.e. part of the math concept discussed above.
The following limitations are adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g):
accessing, at one or more computing machines, an input image of footwear, the input image comprising a pixel or voxel based image; in response to the accessing of the input image, - mere data gathering
… using a generative adversarial network (GAN) engine trained on one or more footwear component structures to generate the orthographic view;… via a reinforcement learning engine - generic in view of ¶ 86, with purely a desired result of what it is trained to do, with no recitation of how; also considered as generally linking to a particular technological environment in view of ¶ 86, as “any other engine” can also do this same thing. See Recentive Analytics and examples 46-47 for their recitations of “using the trained ANN” and the like.
Similarly for the reinforcement learning engine: the instant disclosure sheds no light on what the reinforcement learning engine is or how it is used to achieve the claimed functionality (¶¶ 20, 83, 104, 129, 137, and elsewhere), but for eq. 1 to be used in the abstract idea as discussed above.
…via a trained classification engine… by generating one or more bounding boxes around data representing the one or more visual components, the generating of the one or more bounding boxes being based on the classification engine training on a plurality of images that are each labeled with at least one class, each class corresponding to a type of object in a respective image;… - both part of the mere instructions to automate an abstract idea with a computer and an insignificant computer implementation, in view of Recentive Analytics. To clarify, see Recentive, then see the instant disclosure ¶ 32: “In some example embodiments, different machine-learning tools may be used. For example, Logistic Regression (LR), Naive-Bayes, Random Forest (RF), neural networks (NN), matrix factorization, and Support Vector Machines (SVM) tools may be used for classifying or scoring job postings” and ¶ 33: “Two common types of problems in machine learning are classification problems and regression problems. Classification problems, also referred to as categorization problems, aim at classifying items into one of several category values (for example, is this object an apple or an orange).” And ¶ 53: “Each class 304 may correspond to a type of object in the image 306 (e.g., a digit 0-9, a man or a woman, a cat or a dog, etc.). In one example, the machine learning program is trained to recognize images of the presidents of the United States, and each class corresponds to each president (e.g., one class corresponds to Barack Obama, one class corresponds to George W. Bush, etc.)...For example, if the image 312 is a photograph of Bill Clinton, the classifier recognizes the image as corresponding to Bill Clinton at block 314.” And ¶¶ 54-60: “A machine learning algorithm is designed for recognizing faces, and a training set 302 includes data that maps a sample to a class 304 (e.g., a class includes all the images of purses).” And ¶¶ 96-97 – also, see ¶ 101: “One challenge of machine learning in this field is reducing the search space. The instant disclosure solves the reducing search space challenge by first classifying a first level of bounding boxes then drilling down into each bounding box of the first level to refine identifications” – i.e. while ¶ 101 conveys an improvement to technology, it also conveys that using bounding boxes in this manner was generic, i.e. the claim does not recite the alleged improvement, but rather merely what was generic.
Should it be found the entire step of the identifying with the classification engine with the bounding boxes has no abstract idea, then the Examiner submits it would be mere data gathering as well.
The recitation of “automatically generating an output representing the 3D CAD digital asset” is considered as mere data outputting/data displaying, wherein the limitation of “the output comprising a digital file of one or more applications including at least one of: a 3D printing application of the footwear, manufacturing instructions for the footwear, an augmented reality (AR) application for the footwear, or a virtual reality (VR) application for the footwear” is considered as generally linking to a particular field of use/technological environment, as well as mere instructions to “apply it” given the generality recited and the results-oriented nature of this limitation (i.e. it merely generically says what applications the file may be used for, with no recitation of how the file is to be particularly generated for said applications or what the file would be for said applications, i.e. mere instructions to “apply” the file to one of these applications). It is also considered as part of the token post-solution activity of mere data outputting/data displaying.
A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. See MPEP § 2106.04(d).
The claimed invention does not recite any additional elements that integrate the judicial exception into a practical application. Refer to MPEP §2106.04(d).
Step 2B
The claimed invention does not recite any additional elements/limitations that amount to significantly more.
The following limitations are merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f), including the “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more”:
Claim 1 - at one or more computing machines; Claim 16 – A non-transitory machine-readable medium storing instructions which, when executed by one or more computing machines, cause the one or more computing machines to perform operations comprising: claim 19 – A system comprising: processing circuitry; and a memory storing instructions which, when executed by the processing circuitry, cause the processing circuitry to perform operations comprising:
The following limitations are generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h):
…footwear… - generally linking to a field of use, when read in view of ¶ 74: “However, the technology disclosed herein is not limited to footwear and may be expanded to any other apparel (e.g., jeans, leather jackets, skirts, etc.) or non-apparel items (e.g., tables, chairs, houses, bicycles, baseball mitts, hockey sticks, baseballs, basketballs, etc.)”, also see ¶ 85, and ¶ 100: “Furthermore, the technology described herein is not limited to footwear and may be expanded to any other apparel (e.g., jeans) or apparel components (e.g., pocket(s), belt loop(s), zipper, and button of jeans). The technology may be expanded even further to non-apparel items (e.g., dining table) or components of non-apparel items (e.g., table leg(s) and tabletop of dining table)” and ¶ 135: “However, the technology disclosed herein is not limited to footwear and may be used for other items (e.g., apparel, equipment, object(s), etc.) that can be modeled in 2D or 3D. Examples of other items for which embodiments of the technology may be used, in addition to or in place of footwear include: athletic equipment, vehicle tires, children's toys, mobile phone cases, desktop ornaments, holiday decorations, candle holders, and the like.”
As discussed above, the recitations of CAD are merely generally linking it to a particular technological environment (akin to the XML tags of MPEP § 2106.05(h)) and akin to the “context of XML” in MPEP § 2106.05(f).
And, as discussed above, the data itself in CAD (NURBS), in view of the specification, is simply math relationships/equations in geometry, i.e. part of the math concept discussed above.
The following limitations are adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g):
accessing, at one or more computing machines, an input image of footwear, the input image comprising a pixel or voxel based image; in response to the accessing of the input image, - mere data gathering
… using a generative adversarial network (GAN) engine trained on one or more footwear component structures to generate the orthographic view;… via a reinforcement learning engine - generic in view of ¶ 86, with purely a desired result of what it is trained to do, with no recitation of how; also considered as generally linking to a particular technological environment in view of ¶ 86, as “any other engine” can also do this same thing. See Recentive Analytics and examples 46-47 for their recitations of “using the trained ANN” and the like.
Similarly for the reinforcement learning engine: the instant disclosure sheds no light on what the reinforcement learning engine is or how it is used to achieve the claimed functionality (¶¶ 20, 83, 104, 129, 137, and elsewhere), but for eq. 1 to be used in the abstract idea as discussed above.
…via a trained classification engine… by generating one or more bounding boxes around data representing the one or more visual components, the generating of the one or more bounding boxes being based on the classification engine training on a plurality of images that are each labeled with at least one class, each class corresponding to a type of object in a respective image;… - both part of the mere instructions to automate an abstract idea with a computer and an insignificant computer implementation, in view of Recentive Analytics. To clarify, see Recentive, then see the instant disclosure ¶ 32: “In some example embodiments, different machine-learning tools may be used. For example, Logistic Regression (LR), Naive-Bayes, Random Forest (RF), neural networks (NN), matrix factorization, and Support Vector Machines (SVM) tools may be used for classifying or scoring job postings” and ¶ 33: “Two common types of problems in machine learning are classification problems and regression problems. Classification problems, also referred to as categorization problems, aim at classifying items into one of several category values (for example, is this object an apple or an orange).” And ¶ 53: “Each class 304 may correspond to a type of object in the image 306 (e.g., a digit 0-9, a man or a woman, a cat or a dog, etc.). In one example, the machine learning program is trained to recognize images of the presidents of the United States, and each class corresponds to each president (e.g., one class corresponds to Barack Obama, one class corresponds to George W. Bush, etc.)...For example, if the image 312 is a photograph of Bill Clinton, the classifier recognizes the image as corresponding to Bill Clinton at block 314.” And ¶¶ 54-60: “A machine learning algorithm is designed for recognizing faces, and a training set 302 includes data that maps a sample to a class 304 (e.g., a class includes all the images of purses).” And ¶¶ 96-97 – also, see ¶ 101: “One challenge of machine learning in this field is reducing the search space. The instant disclosure solves the reducing search space challenge by first classifying a first level of bounding boxes then drilling down into each bounding box of the first level to refine identifications” – i.e. while ¶ 101 conveys an improvement to technology, it also conveys that using bounding boxes in this manner was generic, i.e. the claim does not recite the alleged improvement, but rather merely what was generic.
Should it be found the entire step of the identifying with the classification engine with the bounding boxes has no abstract idea, then the Examiner submits it would be mere data gathering as well.
The recitation of “automatically generating an output representing the 3D CAD digital asset” is considered as mere data outputting/data displaying, wherein the limitation of “the output comprising a digital file of one or more applications including at least one of: a 3D printing application of the footwear, manufacturing instructions for the footwear, an augmented reality (AR) application for the footwear, or a virtual reality (VR) application for the footwear” is considered as generally linking to a particular field of use/technological environment, as well as mere instructions to “apply it” given the generality recited and the results-oriented nature of this limitation (i.e. it merely generically says what applications the file may be used for, with no recitation of how the file is to be particularly generated for said applications or what the file would be for said applications, i.e. mere instructions to “apply” the file to one of these applications). It is also considered as part of the token post-solution activity of mere data outputting/data displaying.
In addition, the following are also considered as well-understood, routine, and conventional activities, as discussed in MPEP § 2106.05(d):
… using a generative adversarial network (GAN) engine trained on one or more footwear component structures to generate the orthographic view;… via a reinforcement learning engine as well as …via a trained classification engine… by generating one or more bounding boxes around data representing the one or more visual components, the generating of the one or more bounding boxes being based on the classification engine training on a plurality of images that are each labeled with at least one class, each class corresponding to a type of object in a respective image;…– this is considered WURC in view of:
The instant disclosure, ¶ 33: “Two common types of problems in machine learning are classification problems and regression problems.”, and for the reinforcement learning engine, see ¶¶ 20, 83, 86, 104, 129, 137, and elsewhere as cited and discussed above for the lack of detail in the disclosure for what this even is or how it is to be implemented, then see MPEP 2106.07(a): “A specification demonstrates the well-understood, routine, conventional nature of additional elements when it describes the additional elements as well-understood or routine or conventional (or an equivalent term), as a commercially available product, or in a manner that indicates that the additional elements are sufficiently well-known that the specification does not need to describe the particulars of such additional elements to satisfy 35 U.S.C. 112(a).” – as clarified in MPEP § 2164.01: “A patent need not teach, and preferably omits, what is well known in the art. In re Buchner, 929 F.2d 660, 661, 18 USPQ2d 1331, 1332 (Fed. Cir. 1991); Hybritech, Inc. v. Monoclonal Antibodies, Inc., 802 F.2d 1367, 1384, 231 USPQ 81, 94 (Fed. Cir. 1986), cert. denied, 480 U.S. 947 (1987); and Lindemann Maschinenfabrik GMBH v. American Hoist & Derrick Co., 730 F.2d 1452, 1463, 221 USPQ 481, 489 (Fed. Cir. 1984).”
Bernstein, A. V., and Evgeny V. Burnaev. "Reinforcement learning in computer vision." Tenth International Conference on Machine Vision (ICMV 2017). Vol. 10696. SPIE, 2018. Abstract: “In recent years, Reinforcement learning has been used both for solving such applied tasks as processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others.” Also see §§ 1-3, including §§ 3.1-3.2; in particular, § 3.2, as cited to above: “A large number of non-trivial, powerful techniques devoted to the visual recognition of objects have been developed during the last decades, and RL is one of tools that selects the best descriptor for every image from a given set. The key idea in RL of Visual Classes, where the agent is faced with visual inputs, is to focus the attention of the agent on a small number of very distinctive visual features that allow the agent to reason upon visual classes rather than raw pixels, and that enhance its generalization capabilities [33, 36, 40]. The challenging problem is to refine visual classes dynamically when the agent identifies inconsistencies in earned discounted returns when faced with that class. Therefore, the solution to this problem (the learning algorithm) will have to solve simultaneously a computer vision problem (image classification) and an RL problem (construction of the optimal control law). In other words, the solution requires joining RL technique with a Visual Object Recognition framework. The RL-based visual feature extracting system includes some embedded supervised learning algorithm (particular Image classifier) which have to distinguish between visual inputs and able to refine a class on request by learning a new distinctive visual features that are powerful enough to distinguish any functionally-distinguishable percept. 
The classifier translates a low-level information (raw values of pixels) into a high-level information (image class) that will itself feed the RL module (algorithm). Based on this information, the RL agent has to inform the image classifier when learning of a new visual class is required. Since there is no external supervisor telling the agent when a refinement is needed, the RL algorithm can only rely on some robust criterion (for example, the aliasing criterion in [33]) able to decide when the classification is not accurate enough”
Szepesvari, Algorithms for Reinforcement Learning, Mar. 12th, 2019, sites(dot)ualberta(dot)ca/~szepesva/papers/RLAlgsInMDPs(dot)pdf – see § 1
Wang, Lingjing, et al. "Unsupervised learning of 3D model reconstruction from hand-drawn sketches." Proceedings of the 26th ACM international conference on Multimedia. 2018. §§ 1-1.1, incl.: “Researchers have explored 3D objects reconstruction from line drawings for more than a decade [13, 19]. Existing research on this topic can be broadly categorized as two of the following: (i) traditional method without learning, and (ii) data-driven learning-based approach…To further advance the field of deep learning, researchers are beginning to apply supervised models to sketch-based 3D volumetric objects reconstruction…” and § 1.2: “Researchers have made considerable progress on 3D objects reconstruction from 2D hand-drawn sketches, especially the use of CNN based supervised networks to learn patterns between 2D sketches and associated 3D ground truth models.”
Han, Xian-Feng, Hamid Laga, and Mohammed Bennamoun. "Image-based 3D object reconstruction: State-of-the-art and trends in the deep learning era." IEEE transactions on pattern analysis and machine intelligence 43.5 (2019): 1578-1604. Abstract: “3D reconstruction is a longstanding ill-posed problem, which has been explored for decades by the computer vision, computer graphics, and machine learning communities. Since 2015, image-based 3D reconstruction using convolutional neural networks (CNN) has attracted increasing interest and demonstrated an impressive performance. Given this new era of rapid evolution, this article provides a comprehensive survey of the recent developments in this field. We focus on the works which use deep learning techniques to estimate the 3D shape of generic objects either from a single or multiple RGB images” – then see § 1
Perez, Yeritza. Semantically-rich as-built 3D modeling of the built environment from point cloud data. Diss. University of Illinois at Urbana-Champaign, 2020. See § 1.1, including # 1-3, then see § 2.1 including ¶¶ 1-2, then see the subsection “Methods that label point cloud data” starting on page 13 incl.: “Another step within the Scan-to-BIM process is, to assign segment labels, a process also known as semantic segmentation. The purpose of labeling the point cloud segments is to facilitate the generation of semantically rich 3D models. Many studies have been conducted to identify outdoor objects (e.g., car, tree, traffic light); furniture objects (e.g., table, chair, and shelf); structural, architectural elements (floor, ceiling, wall); or MEP components (e.g., pipe, duct). A large body of work including [47, 48, 49, 50, 51, 52, 53] focus their efforts on classifying outdoor objects (e.g., cars, trees, vegetation, pole, facade, traffic light). [54, 55, 56, 57] use Neural Networks to classify outdoor scenes that contain vegetation, vehicles, traffic lights, roads, and buildings…” – see the remaining portions of this subsection for more details, then see page 14 ¶ 2: “Once a point cloud is segmented and/or labeled into relevant parts, the next step within the Scan-to-BIM process is to fit geometric surfaces into the segmented parts..” – also, see § 3.2.2 ¶ 2 , then see § 3.2.3, then see § 4.2 including: “Over recent years, the use of Deep Neural Networks for semantic segmenting of point cloud data has gain significant popularity”
Iglesias, Andrés, G. Echevarría, and Akemi Gálvez. "Functional networks for B-spline surface reconstruction." Future Generation Computer Systems 20.8 (2004): 1337-1353. Abstract and § 1
Johnson, Kyle, Clayton Chang, and Hod Lipson. "Neural Network Based Reconstruction of a 3D Object from a 2D Wireframe." arXiv preprint arXiv:1007.2442 (2010). Abstract, § 2
Minto, Ludovico, Pietro Zanuttigh, and Giampaolo Pagnutti. "Deep Learning for 3D Shape Classification based on Volumetric Density and Surface Approximation Clues." VISIGRAPP (5: VISAPP). 2018. § 2
Delanoy, Johanna, et al. "3d sketching using multi-view deep volumetric prediction." Proceedings of the ACM on Computer Graphics and Interactive Techniques 1.1 (2018): 1-22. §§ 1 and 2.1-2.2
Han, Xiaoguang, Chang Gao, and Yizhou Yu. "DeepSketch2Face: a deep learning based sketching system for 3D face and caricature modeling." ACM Transactions on graphics (TOG) 36.4 (2017): 1-12. § 2
Yetiş, Gizem. Auto-conversion from 2D drawing to 3D model with deep learning. MS thesis. Middle East Technical University, 2019. Abstract, § 2.2.2
accessing, at one or more computing machines, an input image of footwear, the input image comprising a pixel or voxel based image; in response to the accessing of the input image – this is considered similar to the example well-understood, routine, and conventional (WURC) activity as discussed in MPEP § 2106.05(d)(II) of: “i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network);”
automatically generating an output representing the 3D CAD digital asset, the output comprising a digital file of one or more applications including at least one of: a 3D printing application of the footwear, manufacturing instructions for the footwear, an augmented reality (AR) application for the footwear, or a virtual reality (VR) application for the footwear. – see MPEP § 2106.05(d)(II): “iv. Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93;”, as well as ¶ 74 of the instant disclosure which gives a generic description of what this output “could be used for”, and ¶ 130 which gives a generic functional description of a “downstream engine” and generic manufacturing machines that may be used; ¶ 86 discusses generally that “machine code…for 3D printing or manufacturing” may be generated but again omits any particular details on how to do this – also, see the below cited evidence for the 3D NURBS CAD models
In addition to the above, the Examiner notes also that the use of 3D NURBS CAD models is WURC, in view of:
Issolio, HIGH TECHNOLOGY PROCESS INNOVATION IN FOOTWEAR DESIGN, POLITECNICO DI MILANO, Master’s Thesis, 2018, § 2.1 starting on page 28, and §§ 1.1-1.3 starting on page 80. Incl. seeing in § 2.1: “Within the fashion industry, many relevant CAD programs have been developed over the past decades. Art-based packages such as Adobe Illustrator, Photoshop, InDesign and SketchbookPro are invaluable, along with 3D programs developed specifically for shoe design and manufacturing such as Shoe Master CAD design, ICad3D and Romans CAD. These original 3D programs for shoe manufacturing taught designers to work from 3D lasts digitized onto the screen that were transformable into 2d flat patterns for sample making and manufacturing. This gave designers a unique insight into the shape and form of new designs instantly. Alternating between the 2D and 3D form gave the ability to instantly alter design details and solve problems that may previously have reminded undiscovered until sample making. It also gave pattern cutters the ability to manipulate and re-cut patterns with speed, saving time and cost.” – and see page 32 ¶¶ 1-2 for more clarification, then see chapter 3 including page 41 ¶¶ 2-4 for its discussion of “3D printing technology” for “manufacturing” as a conventional technology which “in recent years this technology has really started to shake up the fashion world” – see the subchapters which give more details on this, e.g. see page 46 including the figure at the top which describes a commercially available “app” from “Feetz”, and see the pictures in § 3.1 which provide numerous photographs for this example. § 3.2, incl. page 53, discusses the use of 3D printing for shoes by Nike (the instant assignee).
Raffaeli et al., “Advanced computer aided technologies for design automation in footwear industry”, 2011, § 2, page 138, col. 2, third to last paragraph: “In authors’ opinion, as it happens in other fields, IT systems should support the whole shoe development cycle, starting from conceptual design phase. Nowadays in electronic and mechanical product development departments, CAD systems, virtual reality and virtual prototyping tools support the designer in almost every stage of product development. Research efforts should push toward the adoption of these technologies also in the footwear field.”
Xu, Jie. "Development of an integrated platform for online fashion sketch design." (2015). Hong Kong Polytechnic University. PhD Diss. See chapter 2, including § 2.1.1: “Computer-Aided Design (CAD) and Computer-Aided Manufacturing (CAM) are widely used in different industries nowadays” and fig. 2-9 along with its accompanying description. Also, see pages 33-34, the paragraph split between the pages: “Visualization is one of the key challenges in the development of Web-based CAD system. Generally, different technologies like Virtual Reality (VR), CAD, and CAD viewer (also called Web-based Collaborative Visualization WCV technology) may provide some solutions”.
Buonamici, Francesco, et al. "Reverse engineering modeling methods and tools: a survey." Computer-Aided Design and Applications 15.3 (2018): 443-464. Abstract, §§ 1-2, then see §§ 3-3.4.
Pascual et al., “A RE Methodology to achieve Accurate Polygon Models and NURBS Surfaces by Applying Different Data Processing Techniques”, 2020 – see § 1 ¶¶ 1-3
Bradley, Colin, and Bernadette Currie. "Advances in the field of reverse engineering." Computer-Aided Design and Applications 2.5 (2005): 697-706. See §§ 1-2, and §§ 3.2-3.3. Including, in § 1 ¶ 1: “A common interpretation of the phase “reverse engineering”, first used in publications in the 1970s revolves around copying an original. Reverse engineering technology enables the creation of a digital model using data collected from an existing object. Research from areas such as image processing, computer graphics, advanced manufacturing and virtual reality has converged around creating a computer-based representation of the authentic article. In engineering applications, comprehensive reconstruction of the part form is required to recreate the original object while deviating from the measured points by less than a predefined tolerance [note the similarity between this concept and what is presently claimed].”
Butdee, S., and K. Tangchaidee. "Formulation of 3D shoe sizes using scanning camera and CAD modelling." Journal of Achievements in Materials and Manufacturing Engineering 31.2 (2008): 449-455. §§ 1-3
Raffaeli, Roberto, and Michele Germani. "Advanced computer aided technologies for design automation in footwear industry." International Journal on Interactive Design and Manufacturing (IJIDeM) 5 (2011): 137-149. Abstract, § 2.2, and fig. 2
For more particularity on the limitation “via a trained classification engine… by generating one or more bounding boxes around data representing the one or more visual components, the generating of the one or more bounding boxes being based on the classification engine training on a plurality of images that are each labeled with at least one class, each class corresponding to a type of object in a respective image;…” – see Bernstein as discussed above; then see:
Karagiannakos, Sergios. “Localization and Object Detection with Deep Learning”. AI Summer. March 2019. URL: theaisummer(dot)com/Localization_and_Object_Detection/ - “Localization and Object detection are two of the core tasks in Computer Vision , as they are applied in many real-world applications such as Autonomous vehicles and Robotics. So, if you want to work in these industries as a Computer vision specialist or you want to build a relative product , you better have a good grasp of them. But what are they? What Object detection and localization means? …First things first. Let’s do a quick recap of the most used terms and their meaning to avoid misconceptions: …Object detection: Classify and detect all objects in the image. Assign a class to each object and draw a bounding box around it…”, see image for clarification, then see description of “Object Detection”, and its subsections describing how “a fundamental [in the field of endeavor of image recognition] paper” uses an “R-CNN” to do this (a trained classifier). See the subsection on the Faster RCNN, incl: “As the name suggests, FasterRCNN turns out to be much faster than the previous models and is the one preferred in most real-world applications.” – and its overview: “And we can take this a step further. Using the produced feature maps from the convolutional layer, we infer regions proposal using a Region Proposal network rather than relying on an external system. Once we have those proposal, the remaining procedure is the same as Fast-RCNN (forward to ROI layer, classify using SVM and predict the bounding box). The trick part is how to train the whole model as we have multiple tasks that need to be addressed: 1. The region proposal network should decide for each region if it contains an object or not. 2. And it needs to produce the bounding box coordinates. 3.The entire model should classify the objects to categories. 4. And again, predict the bounding box offsets. 
If you want to learn more about the training part you should check the original paper, but to give you an overview we need to utilize a multitask loss to include all 4 tasks and back propagate this loss to the network.”
[Figure: media_image8.png, 270 × 640, greyscale]
Yu, Hongkai, et al. "Loosecut: Interactive image segmentation with loosely bounded boxes." 2017 IEEE International Conference on Image Processing (ICIP). IEEE, 2017. Abstract: “One popular approach to interactively segment an object of interest from an image is to annotate a bounding box that covers the object, followed by a binary labeling. However, the existing algorithms for such interactive image segmentation prefer a bounding box that tightly encloses the object” – and see § 1. Also see:
Han, Xian-Feng, Hamid Laga, and Mohammed Bennamoun. "Image-based 3D object reconstruction: State-of-the-art and trends in the deep learning era." IEEE Transactions on Pattern Analysis and Machine Intelligence 43.5 (2019): 1578-1604. § 3.3 ¶ 2
Buonamici et al., “Reverse engineering modeling methods and tools: a survey”, 2018, page 454, the paragraph split between the columns
Lempitsky, Victor, et al. "Image segmentation with a bounding box prior." 2009 IEEE 12th international conference on computer vision. IEEE, 2009. Abstract: “User-provided object bounding box is a simple and popular interaction paradigm considered by many existing interactive image segmentation frameworks” and see § 1
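As a purely illustrative aside (hypothetical code, not drawn from the instant disclosure or any cited reference), the bounding-box concept discussed above – assigning a class to each detected object and scoring a predicted box against a labeled ground-truth box – rests on the conventional intersection-over-union (IoU) overlap measure used when training detectors such as the R-CNN family:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Overlap rectangle, clamped to zero width/height when the boxes are disjoint.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted box partially overlapping a labeled ground-truth box.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25/175 ≈ 0.1429
```

A detector counts a prediction as correct when this overlap exceeds a chosen threshold, which is the sense in which the cited references treat a “loosely” versus “tightly” enclosing box.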
The claimed invention is directed to an abstract idea of both a mathematical concept and a mental process without significantly more.
Regarding the dependent claims:
Claim 2 further limits the mathematical concept, adding additional mathematical calculation recited in textual form
Claim 3: the training in claim 3 is rejected under a similar rationale as discussed above for the use of the classification engine, and is also an insignificant extra-solution activity that is nominal/tangential to the primary process of the claimed invention, wherein training is WURC in view of the evidence discussed above for claim 1. The transformations of claim 3 are considered mental process steps when read in view of ¶ 96: “One technique which may be applied here generates several line-art variations of the components using 2D transformations such as rotation, scaling, etc. For example, the logo is rotated, or enlarged, or shrunk, or displaced, or any combination of these operations” – e.g., a person mentally observing images, such as on paper, and rotating the image (e.g., on a table, or in their mind, rotating the image 90 degrees), or scaling the image by mental evaluation/judgment, e.g., observing a simple image such as # 702 in fig. 7 and then evaluating/judging to scale it up or down in size, either in their own mind, by re-drawing the image at a larger or smaller scale using physical aids, by re-drawing a portion of the image such as the logo, or by mentally visualizing such a change. Neither the claims nor the disclosure describes any particular technological method of the transformation; as such, it is readily performed mentally or with physical aids (e.g., rotating an image, or a portion of an image, such as when the image is “line art” as discussed in ¶ 96 and elsewhere, i.e., art drawn by a person)
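For illustration only (a hypothetical sketch, not taken from the instant disclosure), the 2D transformations described in ¶ 96 – rotation, scaling, and displacement – amount to elementary coordinate arithmetic of the kind a person could carry out with pen and paper:

```python
import math

def transform(points, angle_deg=0.0, scale=1.0, dx=0.0, dy=0.0):
    """Rotate 2D points about the origin, scale them, then displace them."""
    t = math.radians(angle_deg)
    c, s = math.cos(t), math.sin(t)
    return [(scale * (x * c - y * s) + dx, scale * (x * s + y * c) + dy)
            for x, y in points]

# Rotating the point (1, 0) by 90 degrees carries it onto (0, 1),
# the same result a person obtains by turning the sketch a quarter turn.
print(transform([(1.0, 0.0)], angle_deg=90.0))
```

Applying such a function to every vertex of a line-art component produces the rotated/enlarged/shrunk/displaced variations ¶ 96 contemplates.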
Claim 4 further limits the mathematical concept and the mental process
Claim 5 adds additional steps to the mental process of the “iteratively refining…” – e.g., a person observing a 3D version of a shoe, observing a prototype version, or observing a mental visualization of a shoe, judging/evaluating how to improve the shoe (e.g., the person is a cobbler, or a shoe designer), and then visualizing the improved shoe – but for the mere instructions to do this on a computer with “the 3D CAD digital asset” (which is also WURC in view of the above-discussed evidence), followed by a recitation of “providing an output…”, which is rejected under a similar rationale as the similar limitation in claim 1 as discussed above
To clarify, claim 5 does not recite any particularity in how the refining step is performed; as such, it is nothing more than the trial-and-error mental process a person, e.g., a shoe designer, would go through to refine a shoe design, but for the mere instructions to do it on a computer
Claim 6 recites an additional step in both the mathematical concept and the mental process: the “representing…” is rejected under a similar rationale as the spline feature of claim 1 as discussed above, and the “iteratively manipulating…” is likewise considered another step in both the mathematical concept and the mental process (e.g., adjusting the “ducks” of a physical spline tool, or doing the same thing with the mathematical function and its control points of the spline as discussed in ¶¶ 22-23), but for the mere instructions to do it on a computer
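To illustrate the role of spline control points referenced in ¶¶ 22-23 (a hypothetical sketch using a cubic Bézier segment, a simple special case of a spline; not taken from the instant disclosure), moving a single control point reshapes the evaluated curve, just as shifting a “duck” reshapes a physical spline:

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = list(ctrl)
    while len(pts) > 1:
        # Interpolate between each consecutive pair of points.
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 2.0), (3.0, 0.0)]
print(de_casteljau(ctrl, 0.5))  # (1.5, 1.5)

# Raising one interior control point pulls the curve upward at the same parameter.
raised = [(0.0, 0.0), (1.0, 3.0), (2.0, 2.0), (3.0, 0.0)]
print(de_casteljau(raised, 0.5))
```

The “iteratively manipulating” of the claim corresponds to repeatedly adjusting entries of `ctrl` and re-evaluating, whether done numerically or by hand with a physical aid.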
Claim 7 recites a mental step, when read in view of ¶ 92: “A first stage, as illustrated, for example in FIG. 7, includes preprocessing 610 of the original artistic design rendition to generate the orthographic projection 612. This stage takes the original design artwork (line sketch 602, photos 604, and/or 3D model 606) and decomposes it into the orthographic projection 612. The orthographic projection 612 may include 2D orthographic views (e.g., top, left, right, back, front, bottom). If these views are already available, for example with the 2D line-art input, some of these steps in the first stage may be skipped. If some views are missing, these can be artistically generated by the designer [i.e. a person creating new sketches],…” – but for the mere instructions to do it on a computer.
To clarify, a person would readily be able to perform such a transformation – e.g. observing an original image of the footwear at an angle, e.g. from a top-left perspective, and evaluating/judging what the footwear would look like from a side view, a front view, and/or a top view, such as by mentally visualizing this from the original perspective, or with the aid of pen and paper in making new sketches of the new views.
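As a purely illustrative sketch (hypothetical code, not from the instant disclosure), generating such orthographic views amounts to projecting 3D coordinates onto the principal planes by discarding one axis:

```python
def orthographic_views(points):
    """Project 3D points onto the three principal planes (top, front, side)."""
    return {
        "top":   [(x, y) for x, y, z in points],   # viewed along the z axis
        "front": [(x, z) for x, y, z in points],   # viewed along the y axis
        "side":  [(y, z) for x, y, z in points],   # viewed along the x axis
    }

# A single vertex of a hypothetical footwear model.
print(orthographic_views([(3.0, 1.0, 2.0)]))
```

Each view is exactly the kind of flat drawing a designer would produce by sketching the object from the corresponding direction.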
Claim 8 is rejected under a similar rationale as claim 7 – such mental 2D transformations would include, for example, rotating the image/”line art” – which a person is readily able to perform mentally when they view an image
Alternatively, should these not be considered mental steps, they would be insignificant extra-solution activities of mere data gathering recited at a high level of generality, wherein the use of such views for 3D CAD model generation is WURC in view of:
Buonamici, Francesco, et al. "Reverse engineering modeling methods and tools: a survey." Computer-Aided Design and Applications 15.3 (2018): 443-464. § 1 ¶ 3: “In case 2D drawings are available, the orthogonal projections of the part can be processed to extract useful geometric data, as described in [38,42,64]; a comprehensive review of these techniques is provided in [31].”
Zhang, Chao, et al. "Automatic 3D CAD models reconstruction from 2D orthographic drawings." Computers & Graphics 114 (2023): 179-189. § 2: “Automatically reconstructing 3D CAD models from 2D orthographic drawings has been a long-standing problem since the 1970s.”
Claim 8 is likewise rejected under this alternative rationale
Claim 9 is considered as an insignificant extra-solution activity of mere data gathering that is WURC in view of the evidence cited above for claims 7-8, as well as Zhou, Hang, et al. "Rotate-and-render: Unsupervised photorealistic face rotation from single-view images." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020. § 1 ¶¶ 1-2: “Face rotation, or more generally speaking multi-view face synthesis, has long been a topic of great research interests, due to its wide applications in computer graphics, augmented reality and particularly, face recognition… With the rapid progress of deep learning and generative adversarial networks (GANs), reconstruction-based methods have been widely applied to face frontalization and rotation [42, 14, 33, 12, 32].” – see § 2.1 subsection “Reconstruction-based Methods” for clarification, as well as § 2.2
The Examiner also notes that the instant disclosure, at ¶ 86 and ¶ 92, provides no particular detail about a particular technological implementation of a particular GAN algorithm to implement the claimed functionality.
In addition, the Examiner also notes ¶¶ 74, 85, 100, and 135 as discussed above
Claim 10 generally links the invention to a field of use/particular technological environment and forms part of the mere instructions to implement the abstract idea on a computer with results-oriented limitations; i.e., the claim merely states to, by any means/methods, train the reinforcement learning engine to achieve the desired result of optimizing spline control points (i.e., the points on the spline curves that control the shape of the curve) based on constraints of the fields of use. With respect to the discussion of orthographic projection metrics, also see the WURC evidence for claim 8; and to further clarify see ¶¶ 82-83, then see ¶ 79: “A CAD engineer may start by recreating or "copying" the original design into the 3D CAD system. Most of the time consuming and tedious work is in making iterative refinements and adjustments of the 3D copy until some metric matches the original design intent is met (see below for discussion of metrics). An example of a metric may be a 3D orthographic projection to a 2D plane to show closeness to the original 2D version” – i.e., this merely generally links the training to the field of use and is part of the mere instruction to use a computer to implement the abstract idea itself, wherein the metric is routinely used by people in the field, and the disclosure merely reuses the same metric the CAD engineer works with in the same iterative refinement step.
In other words, the claim simply states to train the algorithm to automate the mental process performed by the CAD engineer, using the same metric the CAD engineer used. In stark contrast, the Examiner notes McRo: in McRo the rules were not part of the prior subjective mental process, but rather a new, inventive manner of automating the mental process, starkly distinct from the prior subjective mental process – see MPEP § 2106.05(a) for its discussion of McRo. To further clarify, see the discussion of FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 120 USPQ2d 1293 (Fed. Cir. 2016) in MPEP § 2106.04(d)(2)(III)(C): “The patentee in FairWarning claimed a system and method of detecting fraud and/or misuse in a computer environment, in which information regarding accesses of a patient’s personal health information was analyzed according to one of several rules (i.e., related to accesses in excess of a specific volume, accesses during a pre-determined time interval, or accesses by a specific user) to determine if the activity indicates improper access. 839 F.3d. at 1092, 120 USPQ2d at 1294. The court determined that these claims were directed to a mental process of detecting misuse, and that the claimed rules here were "the same questions (though perhaps phrased with different words) that humans in analogous situations detecting fraud have asked for decades, if not centuries." 839 F.3d. at 1094-95, 120 USPQ2d at 1296”
Claim 11 generally links the invention to the field of use of footwear. See ¶¶ 74, 85, 100, and 135 as discussed above, which clarify that the invention as disclosed is “not limited” to footwear but rather is applicable to a wide variety of different fields of use
Claim 12 is considered as mere data gathering that is WURC in view of Yu et al. (Loosecut), Han et al., Buonamici et al., and Lempitsky et al., each cited in full above in the discussion of the bounding-box features of claim 1.
Claim 13 recites additional steps in the mental process of additional mental observations, wherein the additional elements are akin to those in claim 1 and rejected under a similar rationale.
Claim 14 is considered as both mere instructions to “apply it” given the functional, results-oriented recitations with no restriction of how this step is performed, as well as an insignificant extra-solution activity of mere data transmission/outputting that is WURC in view of MPEP § 2106.05(d)(II): “i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network);… iv. Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93…” – also see:
Pascual et al., “A RE Methodology to achieve Accurate Polygon Models and NURBS Surfaces by Applying Different Data Processing Techniques”, 2020. Abstract and § 1 ¶¶ 2-3
Feng, Chen, and Yuichi Taguchi. "FasTFit: A fast T-spline fitting algorithm." Computer-Aided Design 92 (2017): 11-21. § 1 ¶ 1
Issolio, HIGH TECHNOLOGY PROCESS INNOVATION IN FOOTWEAR DESIGN, POLITECNICO DI MILANO, Master’s Thesis, 2018, § 1.3 including: “Another trend in this field is the 3d printing of shoe lasts based on the data of the costumer” and chapter 3 incl. “Since its creation around 1980’s, 3D printing technologies – or Additive Manufacturing- had been mainly used for rapid prototyping within different industries, but in recent years this technology has really started to shake up the fashion world.” , e.g. page 46 ¶ 1 and the figure on the page
Claim 15 is considered as both mere instructions to “apply it” – as this is an “attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result” – as well as an insignificant extra-solution activity of an insignificant application that is WURC in view of Pascual et al., Feng et al. (FasTFit), and Issolio, each cited in full above for claim 14.
Claims 17-18 and 20 are rejected under a similar rationale as claims 2-3 as discussed above
Accordingly, the claimed invention is directed to an abstract idea of both a mathematical concept and a mental process without significantly more.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID A. HOPKINS whose telephone number is (571) 272-0537. The examiner can normally be reached Monday to Friday, 10 AM to 7 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro can be reached at (571) 272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/David A Hopkins/Primary Examiner, Art Unit 2188