Prosecution Insights
Last updated: April 19, 2026
Application No. 18/666,698

SYSTEM, METHOD AND DATA STRUCTURE FOR MAPPING 3D OBJECTS TO 2D SHADED CONTOUR RENDERINGS

Non-Final OA (§102, §103)
Filed: May 16, 2024
Examiner: TUNG, KEE M
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Broken Lines LLC
OA Round: 1 (Non-Final)
Grant Probability: 8% (At Risk)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 18%

Examiner Intelligence

Career Allow Rate: 8% (grants only 8% of cases; 15 granted / 189 resolved; -54.1% vs TC avg)
Interview Lift: +10.6% (moderate +11% lift; resolved cases with interview)
Avg Prosecution: 3y 0m (typical timeline; 12 currently pending)
Total Applications: 201 (career history; across all art units)

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 189 resolved cases

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Status of Claims

Claims 1-3 are currently pending in this application.

Claim Objections

Claims 1 and 2 are objected to due to minor informalities:

a). Features in claim 1 are cited with numerals, the numerals used in parentheses are not considered restricting the claim to only the specific numbered examples and they shall be removed.

b). Claim 2 line 5-6 recites “the three dimensional object with the the one or more two dimensional views” shall be “the three dimensional object with the one or more two dimensional views”.

Corrections to remove the numerals and redundancy from the claims are required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 2 and 3 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Boulkenafed et al. (2017/0161590).

Regarding claim 2, Boulkenafed teaches a computer-implemented method (e.g., a computer-implemented method for recognizing a three-dimensional modeled object from a two-dimensional image. Boulkenafed: [0011] L.1-3) comprising:

receiving a three dimensional model of a physical object (e.g., Notably, the present invention does not require any specific constraint to be applied on the input data (the 2D image) for retrieving a 3D modeled object. Boulkenafed: [0039] L.1-3. The way the model is trained allows to obtain signatures that are adapted to the type of the 3D objects that are stored. Boulkenafed: [0039] L.15-17. In the context of CAD, a modeled object may typically be a 3D modeled object, e.g. representing a product such as a part or an assembly of parts, or possibly an assembly of products. By “3D modeled object”, it is meant any object which is modeled by data allowing its 3D representation. A 3D representation allows the viewing of the part from all angles. For example, a 3D modeled object, when 3D represented, may be handled and turned around any of its axes, or around any axis in the screen on which the representation is displayed. Boulkenafed: [0045] L.1-10);

determining, based on the three dimensional model, one or more two dimensional views of the physical object (e.g., The steps S100 to S150 are an example for providing a first set of 2D images rendered from a 3D modeled object, wherein each 3D image is associated to a label. At step S100, a 3D modeled object is provided. Providing a 3D modeled object means that the system that performs the offline stage can access data that allows 3D representation of the object, as defined above. This is performed, e.g. by providing an access to a database that stores at least one 3D modeled object. Boulkenafed: [0060]. For a provided 3D modeled object, several viewpoints are determined, thus forming a plurality of viewpoints on the 3D modeled objects. At least one viewpoint is selected and an image is computed according to the selected viewpoint. Here the term viewpoint means a specific location in a 3D scene (the 3D modeled object is located and rendered in the 3D scene) at which the camera is placed to take a shot, as known in the art. Boulkenafed: [0062]);

correlating one or more feature vectors of the three dimensional object with the (e.g., The offline indexing of steps S500 to S510 is represented at the bottom of FIG. 6. 2D images rendered from 3D modeled objects are provided to a feature vector extractor 610, and the extracted features vectors are transmitted to a feature vector indexer 620 that build the index of feature vectors. In an example of FIG. 6, the second to last fully-connected layer of the neural network is extracted. Still in this example, the neural network is a AlexNet CNN, and, this second to last layer contains 4096 neurons. From each 2D rendered image, 4096-dimensional feature vector is extracted. Boulkenafed: [0077]. Next, the computed feature vector of the 2D image provided at step S720 is compared (S740) with the feature vectors of the index that was built at step S500-S510. This comparison is made as known in the art. The comparison uses the provided similarity metric in order to determine what is (or are) the closest feature vectors that are indexed. Thus, one or more matching are obtained between the extracted feature vector of the 2D image provided at step S720 and one or more feature vectors of the index. It is to be understood that no matching can be determined if the discrepancies between the provided 2D image and the 3D modeled objects indexed are too important. For instance, a very low similarity metric value means no match. Boulkenafed: [0089]);

outputting a data structure including the on the one or more features (e.g., Once a feature vector have been extracted (step S500), the 3D modeled object from which the corresponding 2D image have been rendered is indexed by using the extracted feature vector. It is to be understood that the corresponding 2D image is the 2D rendered image from which the feature vector have been extracted. The index is a data structure that allows improving location speed of data. The building of the data structure is performed as known in the art. Boulkenafed: [0076]);

training a first predictive model utilizing the data structure (e.g., Thus, at step S150, a first set of 2D images rendered from the provided 3D modeled object and labelled is obtained. This first set is provided to the training system used for obtaining a trained model. Boulkenafed: [0069]); and

training a second predictive model utilizing the data structure (e.g., The offline learning of steps S100 to S180 is represented on top of FIG. 6. Notably, the training set of a DNN (a CNN in the example of this figure) comprises 2D images provided by the first and second sets. No constraint is required for the both kinds of images. In practice, the training dataset consists of one quarter (¼) of rendered images and three quarters (¾) of photos; this allows improving the classification results of the DNN. Other combinations of images might be considered. The best results have been obtained with a training dataset comprising one quarter of rendered images and three quarters of photos. The training set is provided to the DNN learner 600 which hosts the DNN and computed the model obtained at step S180. The DNN learning machine 600 is typically a software application, and is implemented as known in the art. Boulkenafed: [0074]).

Regarding claim 3, Boulkenafed teaches a method for generating a data structure for training a predictive model (e.g., at step S150, a first set of 2D images rendered from the provided 3D modeled object and labelled is obtained. This first set is provided to the training system used for obtaining a trained model. Boulkenafed: [0069]. At step S160, a second set of 2D images is provided to the training system, e.g. a DNN. Boulkenafed: [0070] L.1-2), comprising:

providing a computer-implemented system including: a memory storing a data structure configured to correlate features between 3D CAD models and corresponding 2D engineering views (e.g., The feature extraction networks are connected to the bottom of each of the Siamese network branches, and back-propagation is used to learn at the same time the feature vectors and the distance metric. This allows engineering a better feature for the comparison task at hand. The model is trained with 2D images that are paired, one 2D image of the pair belongs to the first set and the second one belong to the second set. The two images of a pair are labelled, and the label comprises at least similarity information. Boulkenafed: [0093] L.11-20); and a processor operatively coupled to the memory (e.g., A typical example of computer-implementation of the method is to perform the method with a system adapted for this purpose. The system comprises a processor coupled to a memory. It may further comprise a graphical user interface (GUI). Typically, the memory has recorded thereon a computer program comprising instructions for performing the method. Boulkenafed: [0041] L.1-7);

populating the data structure in the memory with model data including a spatial structure and complex geometries and 2D data including multiple standard views depicting the object from various angles (e.g., First, a signature is extracted for each and every media of the collection. The extraction process is typically repeated tens of thousands times; the number of repetition depends at least on the number of media in the collection of media. Second, a structured list is created. This list is usually referred to as an index containing all the signatures and the links to the actual media in the collection. The index is the data structure that allows a fast retrieval of the closest signature to a query. The term feature can be used for designating the signature, that is, the feature are derived values of the provided collection of media and are intended to be informative, non-redundant, facilitating the subsequent learning. Boulkenafed: [0056] L.8-20. The steps S100 to S150 are an example for providing a first set of 2D images rendered from a 3D modeled object, wherein each 3D image is associated to a label. At step S100, a 3D modeled object is provided. Providing a 3D modeled object means that the system that performs the offline stage can access data that allows 3D representation of the object, as defined above. This is performed, e.g. by providing an access to a database that stores at least one 3D modeled object. Boulkenafed: [0060]. For a provided 3D modeled object, several viewpoints are determined, thus forming a plurality of viewpoints on the 3D modeled objects. At least one viewpoint is selected and an image is computed according to the selected viewpoint. Here the term viewpoint means a specific location in a 3D scene (the 3D modeled object is located and rendered in the 3D scene) at which the camera is placed to take a shot, as known in the art. Boulkenafed: [0062]);

correlated feature data linking features from the 3D models to respective 2D views (e.g., The number N of 2D images rendered from a 3D modeled object is the same as the number M of viewpoints. However, this number M of viewpoints may be huge and some 2D images obtained from viewpoints can be noisy for the training, that is, useless. In order to limit the number of viewpoints, the positions of the camera can be comprised between the top and the bottom facet of the object bounding box to generate 2D images. Boulkenafed: [0066] L.1-8. FIG. 3 shows an example of ten views of the 3D modeled object 200 of FIG. 2. It is to be understood that the number of 2D images is in general more than ten; e.g. around a hundred images may be generated for each 3D model. The exact number depends on the size of each object (i.e., on the bounding box or any other bounding volume of the 3D modeled object). Boulkenafed: [0067] L.1-7. The offline indexing of steps S500 to S510 is represented at the bottom of FIG. 6. 2D images rendered from 3D modeled objects are provided to a feature vector extractor 610, and the extracted features vectors are transmitted to a feature vector indexer 620 that build the index of feature vectors. In an example of FIG. 6, the second to last fully-connected layer of the neural network is extracted. Still in this example, the neural network is a AlexNet CNN, and, this second to last layer contains 4096 neurons. From each 2D rendered image, 4096-dimensional feature vector is extracted. Boulkenafed: [0077]. Next, the computed feature vector of the 2D image provided at step S720 is compared (S740) with the feature vectors of the index that was built at step S500-S510. This comparison is made as known in the art. The comparison uses the provided similarity metric in order to determine what is (or are) the closest feature vectors that are indexed. Thus, one or more matching are obtained between the extracted feature vector of the 2D image provided at step S720 and one or more feature vectors of the index. It is to be understood that no matching can be determined if the discrepancies between the provided 2D image and the 3D modeled objects indexed are too important. For instance, a very low similarity metric value means no match. Boulkenafed: [0089]. Then, for each feature vector of the index that matches with the extracted feature vector of the 2D image provided at step S720, one or more 3D modeled objects can be identified: the index is a structured list that contains all the signatures (the feature vectors) and the links to the actual media in the collection (the 3D modeled object from which the 2D image associated with a signature have been rendered). Hence, one or more 3D objects that are similar to the object depicted in the image submitted at step S720 are retrieved (S750). Boulkenafed: [0090] L.1-10); and

outputting the data structure (e.g., Once a feature vector have been extracted (step S500), the 3D modeled object from which the corresponding 2D image have been rendered is indexed by using the extracted feature vector. It is to be understood that the corresponding 2D image is the 2D rendered image from which the feature vector have been extracted. The index is a data structure that allows improving location speed of data. The building of the data structure is performed as known in the art. Boulkenafed: [0076]. The feature vector of the 2D image is extracted by use of the trained model during offline stage. The extraction is made by a feature vector extractor 800 that can be the same of the one 610 of FIG. 6. The extracted feature vector is then sent to a matching module 810 that is in charge of computing the similarities between the feature vectors according to a similarity metric. The matching module 810 can access the index of feature vectors for performing the comparisons. The matching module 810 provides at its output a list of one or more feature vectors (S740); being understood that the list can be empty if no match was determined. Boulkenafed: [0091] L.2-13).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Boulkenafed et al. (2017/0161590) in view of Olav3D (“Blender Tutorial: 3D Landscapes to 2D Contour Plot”, Youtube video, published on 1/10/2023, https://www.youtube.com/watch?v=c2XitoHmps0).

Regarding claim 1, Boulkenafed teaches a system (e.g., at steps S110 to S130, 2D images are computed from the 3D modeled object provided at step S100. These computed 2D images are rendered from the 3D modeled object; said otherwise, the 2D images obtained at steps S110 to S130 are synthetic images. A synthetic image (or rendered image) is an image that is computed by a render engine. Boulkenafed: [0061] L.1-7. Notably, this document introduced the idea of including silhouette/contour alignment using dynamic programming in a coarse-to-fine way for search efficiency. Boulkenafed: [0005] L.6-8. A typical example of computer-implementation of the method is to perform the method with a system adapted for this purpose. Boulkenafed: [0041] L.1-3), the system comprising:

a processor (A typical example of computer-implementation of the method is to perform the method with a system adapted for this purpose. The system comprises a processor coupled to a memory. It may further comprise a graphical user interface (GUI). Typically, the memory has recorded thereon a computer program comprising instructions for performing the method. Boulkenafed: [0041] L.1-7) configure the system for:

receiving, a 3D model input (e.g., Notably, the present invention does not require any specific constraint to be applied on the input data (the 2D image) for retrieving a 3D modeled object. Boulkenafed: [0039] L.1-3. The way the model is trained allows to obtain signatures that are adapted to the type of the 3D objects that are stored. Boulkenafed: [0039] L.15-17. In the context of CAD, a modeled object may typically be a 3D modeled object, e.g. representing a product such as a part or an assembly of parts, or possibly an assembly of products. By “3D modeled object”, it is meant any object which is modeled by data allowing its 3D representation. A 3D representation allows the viewing of the part from all angles. For example, a 3D modeled object, when 3D represented, may be handled and turned around any of its axes, or around any axis in the screen on which the representation is displayed. Boulkenafed: [0045] L.1-10),

generating, based on the 3D model input, a data structure (e.g., The steps S100 to S150 are an example for providing a first set of 2D images rendered from a 3D modeled object, wherein each 3D image is associated to a label. At step S100, a 3D modeled object is provided. Providing a 3D modeled object means that the system that performs the offline stage can access data that allows 3D representation of the object, as defined above. This is performed, e.g. by providing an access to a database that stores at least one 3D modeled object. Boulkenafed: [0060]. For a provided 3D modeled object, several viewpoints are determined, thus forming a plurality of viewpoints on the 3D modeled objects. At least one viewpoint is selected and an image is computed according to the selected viewpoint. Here the term viewpoint means a specific location in a 3D scene (the 3D modeled object is located and rendered in the 3D scene) at which the camera is placed to take a shot, as known in the art. Boulkenafed: [0062]),

correlating, the one or more features with the one or more 2D renderings of the physical object (e.g., steps S100 to S150 are an example for providing a first set of 2D images rendered from a 3D modeled object, wherein each 3D image is associated to a label. Boulkenafed: [0060] L.1-3. The offline indexing of steps S500 to S510 is represented at the bottom of FIG. 6. 2D images rendered from 3D modeled objects are provided to a feature vector extractor 610, and the extracted features vectors are transmitted to a feature vector indexer 620 that build the index of feature vectors. In an example of FIG. 6, the second to last fully-connected layer of the neural network is extracted. Still in this example, the neural network is a AlexNet CNN, and, this second to last layer contains 4096 neurons. From each 2D rendered image, 4096-dimensional feature vector is extracted. Boulkenafed: [0077]. Next, the computed feature vector of the 2D image provided at step S720 is compared (S740) with the feature vectors of the index that was built at step S500-S510. This comparison is made as known in the art. The comparison uses the provided similarity metric in order to determine what is (or are) the closest feature vectors that are indexed. Thus, one or more matching are obtained between the extracted feature vector of the 2D image provided at step S720 and one or more feature vectors of the index. It is to be understood that no matching can be determined if the discrepancies between the provided 2D image and the 3D modeled objects indexed are too important. For instance, a very low similarity metric value means no match. Boulkenafed: [0089]);

determining, based on the one or more features, a shaded contour rendering of the physical object (see 1_1 below);

transmitting, to a display device the shaded contour rendering of the physical object (e.g., Then, for each feature vector of the index that matches with the extracted feature vector of the 2D image provided at step S720, one or more 3D modeled objects can be identified: the index is a structured list that contains all the signatures (the feature vectors) and the links to the actual media in the collection (the 3D modeled object from which the 2D image associated with a signature have been rendered). Boulkenafed: [0090] L.1-8. Here, the client computer is the computerized system on which the result of the retrieval has to be displayed; in practice, the client computer is the computer on which the 2D image was provided for triggering the search of similar 3D modeled objects. Boulkenafed: [0090] L.16-20).

While Boulkenafed does not explicitly teach, Olav3D teaches:

(1_1) determining, based on the one or more features, a shaded contour rendering of the physical object (e.g., the generation of 2D contour plots on 3D landscapes; 3D landscape is created. Olav3D: 0:31/1:52 of the video. [video screenshot] It can be seen that the physical object (landscape) includes a plurality of peaks. Array of planes is created and turn into contour lines. Olav3D: 0:58/1:52. [video screenshot] And each plane is shown different shape from top view, the contour lines are obtained and can be modified and render it out later; Olav3D: 1:47/1:52. [video screenshot] Thickness (depth) is added to planes and each plane (slice) shows the landscape at the height of the plane. Olav3D: 1:38/1:52 and 1:40/1:52. [video screenshots] It can be seen that the different contour shows different shading according to the orientation of the physical object from the light source. The shading of contours (shaded contours) vary from light grey to dark and this illustrates how the shape of the landscape varies).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Olav3D into the teaching of Boulkenafed because the 2D plots of the landscape show the landscape with different features at different heights of the planes.

Conclusion

The prior arts made of record and not relied upon is considered pertinent to applicant's disclosure:

a). Eriksen (5,734,393) teaches that “A printer driver 24 within printer 20 receives the data and, in response, controls operation of print engine 26. Control includes feeding formatted data to a print head 28, the movement of which is provided by a carriage 30, shown in FIG. 2, controlled by a carriage servo 32. A motor 34 rotates transfer drum 36 about an axis 38. Motor 34 is controlled by a motor controller 40 receiving control data from printer driver 24.” (Eriksen: c.3 L.8-15 and Fig. 2, which shows shading lines (contours) on the cylindrical surfaces).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SING-WAI WU whose telephone number is (571)270-5850. The examiner can normally be reached 9:00am - 5:30pm (Central Time).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SING-WAI WU/
Primary Examiner, Art Unit 2611
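For readers following the rejection, the Boulkenafed passages quoted above describe a retrieval pipeline: render several 2D views of each 3D model, extract a feature vector per view, index the vectors with links back to the source models, and match a query image against the index with a similarity metric. The sketch below is a minimal illustration of that pipeline only; it is not Boulkenafed's implementation or the claimed system. The renderer and feature extractor (render_views, extract_feature_vector, a fixed random projection standing in for an AlexNet-style CNN) and the model names are hypothetical stand-ins.

```python
# Illustrative sketch of a multi-view feature-vector index, as described in the
# quoted passages. All names and the "renderer"/"extractor" are stand-ins.
import numpy as np

FEATURE_DIM = 4096      # Boulkenafed's example: AlexNet's 4096-neuron penultimate layer
IMAGE_SHAPE = (32, 32)  # toy render size for this sketch

# One fixed random projection stands in for a trained CNN feature extractor.
_PROJECTION = np.random.default_rng(42).standard_normal(
    (IMAGE_SHAPE[0] * IMAGE_SHAPE[1], FEATURE_DIM))


def render_views(model_id, num_views=10):
    """Stand-in for rendering 2D images of a 3D model from several viewpoints."""
    rng = np.random.default_rng(abs(hash(model_id)) % (2 ** 32))
    return [rng.random(IMAGE_SHAPE) for _ in range(num_views)]


def extract_feature_vector(image):
    """Stand-in for a CNN embedding; returns a unit-length feature vector."""
    vec = image.ravel() @ _PROJECTION
    return vec / np.linalg.norm(vec)


def build_index(model_ids):
    """Offline stage: store (feature vector, link to source 3D model) pairs --
    the 'structured list' / index described in the quoted passages."""
    index = []
    for model_id in model_ids:
        for view in render_views(model_id):
            index.append((extract_feature_vector(view), model_id))
    return index


def retrieve(index, query_image, min_similarity=0.5):
    """Online stage: compare the query's vector to the indexed vectors with
    cosine similarity; a very low similarity means no match."""
    q = extract_feature_vector(query_image)
    scored = sorted(((float(q @ vec), model_id) for vec, model_id in index), reverse=True)
    return [model_id for score, model_id in scored if score >= min_similarity]


if __name__ == "__main__":
    index = build_index(["bracket_A", "housing_B", "gear_C"])
    probe = render_views("bracket_A")[0]   # a 2D query image
    print(retrieve(index, probe)[:3])      # expected to surface bracket_A first
```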
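The Olav3D citation, in turn, describes slicing a 3D landscape with an array of planes and shading the resulting contour bands according to the surface's orientation toward the light. The following is a minimal sketch of that idea under the assumption of a heightmap input; the function shaded_contours and the Lambertian shading term are illustrative choices, not the tutorial's Blender workflow and not the applicant's claimed mapping.

```python
# Illustrative sketch: heightmap -> 2D shaded contour image.
# Contour bands approximate the slicing planes; shading comes from a simple
# Lambert term on the local surface normal, so bands facing the light render
# lighter than bands facing away. Names and constants are assumptions.
import numpy as np


def shaded_contours(height, num_bands=8, light_dir=(0.5, 0.5, 0.7)):
    """Return a greyscale image in [0, 1]: contour bands shaded by orientation."""
    # Contour bands: which slicing-plane interval each point falls in.
    lo, hi = height.min(), height.max()
    bands = np.clip(((height - lo) / (hi - lo + 1e-9) * num_bands).astype(int),
                    0, num_bands - 1)
    base = 0.3 + 0.5 * bands / (num_bands - 1)   # higher bands drawn lighter

    # Surface normals from the height gradient, then a simple Lambert term.
    dz_dy, dz_dx = np.gradient(height)
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(height)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    lambert = np.clip(normals @ light, 0.0, 1.0)

    return np.clip(base * (0.4 + 0.6 * lambert), 0.0, 1.0)


if __name__ == "__main__":
    # A toy "landscape" with a couple of peaks, standing in for the 3D model input.
    y, x = np.mgrid[-2:2:200j, -2:2:200j]
    landscape = (np.exp(-((x - 0.7) ** 2 + y ** 2))
                 + 0.8 * np.exp(-2 * ((x + 0.9) ** 2 + (y + 0.5) ** 2)))
    img = shaded_contours(landscape)
    print(img.shape, float(img.min()), float(img.max()))
```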

Prosecution Timeline

May 16, 2024 • Application Filed
Dec 25, 2025 • Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597174
METHOD AND APPARATUS FOR DELIVERING 5G AR/MR COGNITIVE EXPERIENCE TO 5G DEVICES
2y 5m to grant • Granted Apr 07, 2026
Patent 12591304
SYSTEMS AND METHODS FOR CONTEXTUALIZED INTERACTIONS WITH AN ENVIRONMENT
2y 5m to grant • Granted Mar 31, 2026
Patent 12586311
APPARATUS AND METHOD FOR RECONSTRUCTING 3D HUMAN OBJECT BASED ON MONOCULAR IMAGE WITH DEPTH IMAGE-BASED IMPLICIT FUNCTION LEARNING
2y 5m to grant • Granted Mar 24, 2026
Patent 12537877
MANAGING CONTENT PLACEMENT IN EXTENDED REALITY ENVIRONMENTS
2y 5m to grant • Granted Jan 27, 2026
Patent 12530797
PERSONALIZED SCENE IMAGE PROCESSING METHOD, APPARATUS AND STORAGE MEDIUM
2y 5m to grant • Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 8%
With Interview (+10.6%): 18%
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 189 resolved cases by this examiner. Grant probability derived from career allow rate.
