Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 27 February 2026 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 5-9, 11-16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Nobrega et al. (“Interactive 3D content insertion in images for multimedia applications”, hereafter “Nobrega”) in view of Highsmith (US 6771276 B1).
Regarding claim 1, Nobrega teaches a method comprising:
detecting a first object in a perspective image that includes one or more vanishing points (p. 169, 3 Implemented Solution: “By analyzing the lines and their intersections it is possible to identify a large set of candidate vanishing points that can be used to observe the main orientation of the scene. Using a k-means based clustering methodology on the candidate vanishing points point cloud, the main vanishing points of the scene are extracted, as in Rother [36], thus acquiring the vanishing points feature (ϕ1).”); and
receiving a second object for insertion into the perspective image (p. 176, 3.7 Video Based Systems: “Advances in 3D reconstruction using handheld cameras are beginning to allow the introduction of virtual objects that interact with the 3D detected scene [48].”);
extracting a plurality of line segments from the first object, wherein the plurality of line segments are each parallel to a line that passes through one of the one or more vanishing points (p. 176, 4.1 Parameter testing: “To obtain the main lines, e images are generated containing the edges of the image, and from each e image, h line sets are extracted. These different edges and line sets increase the level of redundancy by making the algorithm less prone to errors from illumination factors. The probability of finding a set of lines, which have a clear defined vanishing point, increases when there are more line sets from which to choose from. However, each line set calculated increases the processing time required. In the end there are e × h line sets from which to choose the best for vanishing point detection according to different parameters.”).
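For illustration only, the vanishing-point extraction Nobrega describes in the passages quoted above (candidate points from line intersections, clustered with k-means) can be sketched as follows. This is a minimal sketch under assumed conventions, not Nobrega's actual implementation; all function and variable names are hypothetical.

```python
import itertools
import random

def intersect(l1, l2, eps=1e-9):
    """Intersection of two infinite lines, each given as ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = l1
    (x3, y3), (x4, y4) = l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < eps:
        return None  # parallel lines: no finite candidate vanishing point
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def candidate_vanishing_points(lines):
    """All pairwise line intersections form the candidate point cloud."""
    pts = (intersect(a, b) for a, b in itertools.combinations(lines, 2))
    return [p for p in pts if p is not None]

def kmeans(points, k, iters=50, seed=0):
    """Tiny k-means: the k cluster centers stand in for the main vanishing points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2 + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        centers = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers
```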
Nobrega fails to teach generating, from the plurality of line segments, one or more snap points;
generating a perspective bounding box for the second object based on the one or more snap points, the plurality of line segments, and the one or more vanishing points, by modifying a side of a bounding box of the second object to have a slope parallel to a nearest line segment of the plurality of line segments, the perspective bounding box including at least one line segment with an extended portion that passes through a vanishing point; and
inserting the second object into the perspective image based on the perspective bounding box.
Highsmith teaches generating, from a plurality of line segments, one or more snap points (col. 10, line 59 – col. 11, line 10: “FIG. 10 shows a preferred embodiment flowchart 1000 for determining the bounding box of an object, for example the object selected in step 701 of the flowchart of FIG. 7. The selected object may comprise of one or more curves. In step 1001, a curve on the selected object is determined to be the first curve. This may be done arbitrarily or by utilizing an algorithm known to one of skill in the art. In step 1002, a plurality of critical points on the selected curve are determined. In the preferred embodiment, a point on a curve is called a critical point if the first derivative at the point is zero. In step 1003, a determination is made as to whether there are any other curves in the selected object. If there is another curve, then in step 1004, the next curve is selected and the critical points for the selected curves are determined. Once the critical points for all the curves or a representative sample of the curves has been determined for the selected object, then in the preferred embodiment in step 1005, the determined critical points are utilized to obtain a set of control points for the selected object.”);
generating a perspective bounding box for the second object based on the one or more snap points, the plurality of line segments, and the one or more vanishing points, by modifying a side of a bounding box of the second object to have a slope parallel to a nearest line segment of the plurality of line segments, the perspective bounding box including at least one line segment with an extended portion that passes through a vanishing point (col. 12, line 47 – col. 13, line 3: “In step 1402, the perspective envelope box is divided, preferably into a number of segments equal to the number of segments into which the bounding box is divided. In the preferred embodiment, this is accomplished by finding the intersection of the diagonals of the perspective envelope box. The point of intersection of the diagonals corresponds with the center point of the bounding box. In the case of a 1-point perspective graphics environment, such as shown in FIG. 16, a horizontal line parallel to the horizontal edges of the perspective envelope box and passing through the center point is drawn. A second line intersecting the horizontal line and passing through the center point is also drawn such that if the second line is extended beyond the perspective envelope box it would pass through a vanishing point of the perspective envelope box. Thus, the perspective envelope box is initially divided into four (4) segments. Each of these segments is further divided by determining the intersection point of the diagonals, drawing a first line substantially parallel to the horizontal edge, and drawing a second line which passes through the intersection point and would also pass through the vanishing point if extended beyond the perspective envelope box. The process is repeated until the perspective envelope is divided into a desired number of segments.”); and
inserting the second object into the perspective image based on the perspective bounding box (col. 13, lines 45-50: “In the preferred embodiment, when a user selects an object with the perspective tool and moves it on the screen, preferably by clicking and dragging the object, a check is made to determine if the object is on a perspective grid or not. If the object is on a perspective grid, then the object is moved along the grid.”).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the bounding box-based virtual object placement of Highsmith into the content insertion method of Nobrega, as both are in the same field of endeavor of placing virtual 3D objects in a 2D image. Doing so would allow Nobrega to fine-tune the placement of a virtual object. Nobrega acknowledges the usefulness of bounding boxes in facilitating correct illumination and reflection (p. 191, 6 Discussion and comparison: “Karsch et al. [20] proposed a system to render synthetic objects in photos with correct illumination. It uses an automatic bounding box detection system proposed by Hedau et al. [16] to detect the main structure of the room. In addition, the user does some annotations to help the system detect surfaces and light sources. The result is a system where objects can be introduced in the scene with the correct illumination and reflectance.”). In light of this, it would have been obvious to utilize bounding boxes for object placement.
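The claimed step of modifying a side of the bounding box to have a slope parallel to the nearest line segment can be illustrated with a minimal sketch; the pivot-about-midpoint geometry and all names are assumptions for illustration only (segments are assumed non-vertical).

```python
def slope(seg):
    """Slope of a segment ((x1, y1), (x2, y2)); assumes the segment is non-vertical."""
    (x1, y1), (x2, y2) = seg
    return (y2 - y1) / (x2 - x1)

def midpoint(seg):
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def perspectivize_side(side, scene_segments):
    """Re-slope `side` (e.g. the bottom edge of the bounding box) so it is
    parallel to the nearest scene segment, pivoting about the side's midpoint."""
    m = midpoint(side)
    nearest = min(scene_segments, key=lambda s: dist(m, midpoint(s)))
    s = slope(nearest)
    (x1, _), (x2, _) = side
    # keep the side's x-extent; tilt it through its midpoint with the new slope
    return ((x1, m[1] + s * (x1 - m[0])), (x2, m[1] + s * (x2 - m[0])))
```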
Regarding claim 2, Nobrega and Highsmith teach the method of claim 1. Highsmith further teaches wherein the plurality of line segments are aligned such that an extended portion of the line segment passes through at least one snap point of the one or more snap points and a selected vanishing point of the one or more vanishing points (col. 12, lines 57-61: “A second line intersecting the horizontal line and passing through the center point is also drawn such that if the second line is extended beyond the perspective envelope box it would pass through a vanishing point of the perspective envelope box.”).
Regarding claim 5, Nobrega and Highsmith teach the method of claim 2. Highsmith further teaches wherein generating a perspective bounding box for the second object based on the one or more snap points, the plurality of line segments, and the one or more vanishing points comprises:
determining a horizontal line segment of the bounding box (col. 12, lines 53-57: “In the case of a 1-point perspective graphics environment, such as shown in FIG. 16, a horizontal line parallel to the horizontal edges of the perspective envelope box and passing through the center point is drawn.”); and
adjusting the horizontal line segment based on a nearest snap point such that the horizontal line segment has a slope parallel to a nearest line segment of the plurality of line segments that passes through the snap point and the vanishing point (col. 13, lines 10-17: “Furthermore, although in the embodiment shown in FIG. 16, the horizontal lines are substantially parallel to the horizontal edges of the perspective envelope box, the invention is not so limited. For example, in a 2 point perspective graphics environment these lines are drawn so that the lines pass through the center point and if extended beyond the edges of the perspective envelope box would also pass through the second vanishing point.”).
Regarding claim 6, Nobrega and Highsmith teach the method of claim 1. Highsmith further teaches scaling the perspective bounding box (col. 15, lines 21-24: “In step 1904, the scaled object is applied to the perspective grid preferably as discussed herein with reference to the flowchart of FIG. 7 to provide a scaled perspective view of the object.” NOTE: FIG. 7 referenced above is a flowchart which includes “Determine bounding box of the object” as a step.), wherein scaling comprises decreasing a dimension of the perspective bounding box as a distance between the vanishing point and the perspective bounding box decreases (col. 2, lines 31-38: “By changing the distance between the vanishing points 13 and 14, an object may appear to be translated within three dimension space. Thus, the drawing environment 10' of FIG. 1B comprises a horizon line 11' and a horizontal plane 12'. An object 15' is drawn between the horizon line and the horizontal plane, such that the distance between the vanishing points 13' and 14' is greater than the distance between the vanishing points 13 and 14 of FIG. 1A.”).
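The scaling relationship cited above, a box dimension decreasing as the box nears the vanishing point, can be sketched as a simple proportional rule. The linear proportionality and all names are assumptions for illustration, not the claimed or disclosed formula.

```python
def perspective_scale(dimension, box_center, vanishing_point, reference_distance):
    """Scale a box dimension in proportion to its distance from the vanishing
    point: closer to the vanishing point -> smaller (appears farther away)."""
    d = ((box_center[0] - vanishing_point[0]) ** 2 +
         (box_center[1] - vanishing_point[1]) ** 2) ** 0.5
    return dimension * d / reference_distance
```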
Regarding claim 7, Nobrega and Highsmith teach the method of claim 1. Highsmith further teaches generating a snapping line from the first object that forms a line from the first object to the vanishing point (col. 10, lines 44-47: “In a 2-point perspective environment, if the object is to be applied to a floor grid or plane, the user preferably specifies the vanishing point with which a particular floor plane is associated.”);
displaying the snapping line (col. 10, lines 54-58: “However, if desired the perspective environment may be set up such that by moving the object on the screen while in the perspective mode and placing the object in close proximity to a visible grid will cause the object to be snapped to the nearest visible grid.”); and
snapping the perspective bounding box to the snapping line (col. 10, lines 54-58, as above).
Claim 8 is substantially similar to claim 1, and differs only in that it is directed to a non-transitory computer-readable medium rather than a method. As such, claim 8 is rejected on a similar basis to claim 1.
Claim 9 is substantially similar to claim 2, except that it depends on the non-transitory computer-readable medium of claim 8 rather than the method of claim 1. As such, it is rejected on a similar basis to claim 2.
Claim 11 is substantially similar to claim 5, except that it depends on the non-transitory computer-readable medium of claim 8 rather than the method of claim 1. As such, it is rejected on a similar basis to claim 5.
Claim 12 is substantially similar to claim 6, except that it depends on claim 8 as opposed to claim 1. As such, it is rejected on a similar basis to claim 6.
Claim 13 is substantially similar to claim 7, except that it depends on claim 8 as opposed to claim 1. As such, it is rejected on a similar basis to claim 7.
Regarding claim 14, Nobrega and Highsmith teach the non-transitory computer-readable storage medium of claim 8. Nobrega further teaches wherein the first object is included within the perspective image prior to receiving the second object (p. 167, 2 Problem Formalization: “The fundamental concept for the proposed system is the possibility of creating mixed and augmented reality applications that take advantage of detected visual features in images of real world spaces.”).
Regarding claim 15, Nobrega teaches extracting a plurality of line segments from a first object in a perspective image that includes a vanishing point (p. 176, 4.1 Parameter testing, as above in claim 1 rejection);
Nobrega fails to teach a system comprising: a memory component; and a processing device coupled to the memory component, the processing device to perform operations comprising: receiving a second object including a bounding box; identifying a horizontal line segment of the bounding box; computing a distance between each line segment of the plurality of line segments and the horizontal line segment; determining a modified slope of the horizontal line segment based on a nearest line segment of the plurality of line segments that passes through the vanishing point, wherein the modified slope is parallel to the nearest line segment of the plurality of line segments; generating a perspective bounding box by modifying the horizontal line segment of the bounding box using the modified slope of the horizontal line segment; and inserting the second object into the perspective image using the perspective bounding box.
Highsmith teaches a system comprising:
a memory component (col. 5, lines 1-6: “In the preferred embodiment, system 20 includes at least 16 MB of Random Access Memory (RAM) and is associated with a device capable of storing data, such as a hard drive, a compact disk, a floppy disk, a tape, an optical disk, or the like.”); and
a processing device coupled to the memory component (col. 4, line 66 – col. 5, line 1: “In the preferred embodiment, computer system 20 is a processor based system having an operating system, such as Windows.RTM., UNIX, Macintosh.RTM., Linux and the like.”), the processing device to perform operations comprising:
receiving a second object including a bounding box (col. 9, lines 58-62: “In the preferred embodiment, in order to place a particular object on the perspective grid, an object can be selected, for example, by clicking on the object with the perspective tool. The selected object is bounded preferably by a bounding box to create a bounded object.”);
identifying a horizontal line segment of the bounding box (col. 12, lines 53-57, as above in claim 5 rejection);
computing a distance between each line segment of the plurality of line segments and the horizontal line segment (col. 5, lines 24-28: “The horizontal plane 31 comprises a plurality of grid lines drawn from line 34 to the horizon line 32. Preferably these grid lines are equidistant from each other along line 34 and converge at the vanishing point 33.”);
determining a modified slope of the horizontal line segment based on a nearest line segment of the plurality of line segments that passes through the vanishing point, wherein the modified slope is parallel to the nearest line segment of the plurality of line segments (col. 7, lines 39-44: “In cases where the grid environment comprises three vanishing points, the set of points for the vertical grids or walls may be located along a line which is at an angle to the floor line along the vertical axis. Lines are drawn from the set of points to the respective vanishing points.”);
generating a perspective bounding box by modifying the horizontal line segment of the bounding box using the modified slope of the horizontal line segment (col. 12, line 47 – col. 13, line 3, as above in claim 1 rejection);
and inserting the second object into the perspective image using the perspective bounding box (col. 9, lines 58-64: “In the preferred embodiment, in order to place a particular object on the perspective grid, an object can be selected, for example, by clicking on the object with the perspective tool. The selected object is bounded preferably by a bounding box to create a bounded object. In the preferred embodiment, a perspective envelope box is created by using at least in part the bounding box.”).
Highsmith fails to teach extracting a plurality of line segments from a first object in a perspective image that includes a vanishing point.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the bounding box-based virtual object placement of Highsmith into the content insertion method of Nobrega, as both are in the same field of endeavor of placing virtual 3D objects in a 2D image. Doing so would allow Nobrega to fine-tune the placement of a virtual object. Nobrega acknowledges the usefulness of bounding boxes in facilitating correct illumination and reflection (p. 191, 6 Discussion and comparison: “Karsch et al. [20] proposed a system to render synthetic objects in photos with correct illumination. It uses an automatic bounding box detection system proposed by Hedau et al. [16] to detect the main structure of the room. In addition, the user does some annotations to help the system detect surfaces and light sources. The result is a system where objects can be introduced in the scene with the correct illumination and reflectance.”). In light of this, it would have been obvious to utilize bounding boxes for object placement.
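Highsmith's division of the perspective envelope box, quoted above (intersect the diagonals to find the center, then split with a horizontal line and a line aimed at the vanishing point), can be sketched as follows. The corner ordering and all names are assumptions for illustration, not Highsmith's actual implementation.

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite line p1-p2 with the infinite line p3-p4."""
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def diagonal_center(quad):
    """Center of a perspective envelope box: intersection of its diagonals."""
    a, b, c, d = quad  # corners assumed in order: TL, TR, BR, BL
    return line_intersection(a, c, b, d)

def split_lines(quad, vanishing_point):
    """The two lines dividing the box into four segments (1-point case): a
    horizontal line through the center, and a line through the center that,
    if extended beyond the box, passes through the vanishing point."""
    cx, cy = diagonal_center(quad)
    horizontal = ((cx - 1.0, cy), (cx + 1.0, cy))  # direction (1, 0)
    toward_vp = ((cx, cy), vanishing_point)
    return horizontal, toward_vp
```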
Regarding claim 16, Nobrega and Highsmith teach the system of claim 15. Highsmith further teaches wherein the nearest line segment of the plurality of line segments includes at least one snap point (col. 10, line 59 – col. 11, line 10, as above in claim 1 rejection).
Claim 18 is substantially similar to claim 6, except that it depends on the system of claim 15 as opposed to the method of claim 1. As such, it is rejected on a similar basis to claim 6.
Claim 19 is substantially similar to claim 7, except that it depends on the system of claim 15 as opposed to the method of claim 1. As such, it is rejected on a similar basis to claim 7.
Claims 3-4, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Nobrega (“Interactive 3D content insertion in images for multimedia applications”) and Highsmith (US 6771276 B1) as applied to claims 1-2, 6-9, and 12-16 above, and further in view of Baron (US 10672165 B1).
Regarding claim 3, Nobrega and Highsmith teach the method of claim 2, but fail to teach the additional limitations of claim 3.
Baron teaches generating a number of bins based on an angular distance between a central reference line and each extended portion of each line segment (col. 24, lines 58-62: “A first reference vector, e.g. construction 2070 (FIG. 20B), connecting an image point corresponding to the center of the fiducial mark and the vanishing point is derived for the locale image.” Note here that the “center of the fiducial mark” is considered analogous to a “central reference line.”);
computing a slope for a bin reference line based on an index from the central reference line, wherein the slope is a product of the index and the angular distance (col. 22, lines 3-5: “Vector 2070 connects the vanishing point with the center of the fiducial mark. The slope of the vector 2070 provides a reference for the tilt of the image.”); and
adjusting a slope of a selected line segment of the plurality of line segments based on the slope of the bin reference line nearest to the selected line segment (col. 24, line 62 – col. 25, line 3: “A second reference vector connecting a deduced image point corresponding to the center of the fiducial mark and the vanishing point is derived for the object image. The slopes of the two vectors are compared. Any difference between the slope of the first reference vector and the slope of the second reference vector is optionally corrected by rotating the object image around the point corresponding to the origin of the object centric viewpoint, or optionally rotating the locale image around the corresponding point. ”).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to utilize the mathematics in Baron’s invention to augment the methods taught in Nobrega and Highsmith, as all are in the same field of endeavor of creating realistic 2D images.
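The binning arithmetic of claims 3-4, where each bin reference line's slope is the product of a bin index and a constant angular distance from a central reference line, can be sketched as follows. All names are hypothetical, and the symmetric index range is an assumption for illustration.

```python
import math

def bin_reference_slopes(num_bins, angular_step):
    """Slope of each bin reference line: the i-th line sits at an angle of
    i * angular_step from the central (index-0) reference line, so its
    slope is tan(i * angular_step)."""
    return [math.tan(i * angular_step) for i in range(-num_bins, num_bins + 1)]

def snap_slope(segment_slope, num_bins, angular_step):
    """Adjust a segment's slope to that of the angularly nearest bin
    reference line."""
    refs = bin_reference_slopes(num_bins, angular_step)
    return min(refs, key=lambda r: abs(math.atan(r) - math.atan(segment_slope)))
```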
Regarding claim 4, Nobrega, Highsmith, and Baron teach the method of claim 3. Baron further teaches wherein the angular distance is a constant angular distance that represents a diminishing tolerance between each bin of the number of bins (col. 8, lines 18-19: “The distance D between the viewpoint and point 107 is constant for all of the views”).
Claim 10 is substantially similar to claim 3, except that it depends on the non-transitory computer-readable medium of claim 9 rather than the method of claim 2. As such, it is rejected on a similar basis to claim 3.
Regarding claim 17, Nobrega and Highsmith teach the system of claim 16, but fail to teach the elements of claim 17.
Baron teaches generating a number of bins that define a plurality of regions in the perspective image, wherein a bin comprises a first bin reference line and a second bin reference line that are separated by a constant angular distance (col. 31, lines 49-56: “FIG. 35A shows screen shot, 3500, of an example second embodiment of the computer user interface for defining vertices of the enclosing rectangular cuboid of an object using a predetermined two dimensional image. In said second embodiment, seven vertices, 1215, 1225, 1235, 1245, 1255, 1265, and 3510, are defined in conjunction with the nine connecting optional lines, 1210, 1220, 1230, 1240, 1250, 1260, 1270, 3511, and 3513.”); and
computing a slope for the first bin reference line or the second bin reference line based on an index from a central reference line, wherein the slope is a product of the index and the constant angular distance (col. 12, lines 6-14: “When the origin falls along the edge CL: The image space x-coordinate, X.sub.CLvpo, of the object centric viewpoint origin is calculated as follows:
X.sub.CLvpo=CX+(L.sub.CL−((L.sub.CL/((L.sub.CD/L.sub.LK)+1))*abs(cos(a tan(b.sub.CL)))),
where L.sub.CL is the length of the edge bounded by the vertices C and L, L.sub.LK is the length of the edge bounded by vertices L and K, and b.sub.CL is the slope of the vector fit to vertices C and L.”); and
selecting the nearest line segment from the plurality of line segments, wherein a first bin including the nearest line segment has an index adjacent to a second bin including the horizontal line segment (col. 24, line 57 – col. 25, line 4: “In step 2238, the effect of differential camera tilt between the object image and locale image is mitigated. A first reference vector, e.g. construction 2070 (FIG. 20B), connecting an image point corresponding to the center of the fiducial mark and the vanishing point is derived for the locale image. A second reference vector connecting a deduced image point corresponding to the center of the fiducial mark and the vanishing point is derived for the object image. The slopes of the two vectors are compared. Any difference between the slope of the first reference vector and the slope of the second reference vector is optionally corrected by rotating the object image around the point corresponding to the origin of the object centric viewpoint, or optionally rotating the locale image around the corresponding point.”).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to utilize the mathematics in Baron’s invention to augment the methods taught in Nobrega and Highsmith, as all are in the same field of endeavor of creating realistic 2D images.
Allowable Subject Matter
Claim 21 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Response to Arguments
Applicant’s arguments with respect to claims 1-19 and 21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN A BARHAM whose telephone number is (571)272-4338. The examiner can normally be reached Mon-Fri, 8:30am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN ALLEN BARHAM/Examiner, Art Unit 2613
/XIAO M WU/Supervisory Patent Examiner, Art Unit 2613