Prosecution Insights
Last updated: April 19, 2026
Application No. 18/527,325

Layout for Projected Displays

Status: Final Rejection (§103)
Filed: Dec 03, 2023
Examiner: CHANG, DANIEL CHEOLJIN
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% (117 granted / 132 resolved; +26.6% vs TC avg — above average). The arithmetic behind these figures is sketched below.
Interview Lift: +11.7% (moderate), measured across resolved cases with an interview
Typical Timeline: 2y 6m average prosecution
Career History: 157 total applications across all art units; 25 currently pending
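These tiles are mutually consistent under two assumptions worth making explicit; the sketch below shows the arithmetic. The tool's exact formulas are unpublished, so the percentage-point treatment of the deltas and the 99% display cap are assumptions, not documented behavior.

```python
# Arithmetic behind the displayed figures, under two stated assumptions:
# (1) grant probability is the career allow rate, granted / resolved;
# (2) "vs TC avg" and the interview lift are percentage-point deltas,
#     with the with-interview figure capped at 99% for display (assumed).
granted, resolved = 117, 132
allow_rate = granted / resolved                 # 0.886 -> displayed as 89%
tc_avg = allow_rate - 0.266                     # ~62% implied TC average
with_interview = min(allow_rate + 0.117, 0.99)  # 0.99 -> displayed as 99%
print(f"{allow_rate:.1%} | TC avg ~{tc_avg:.0%} | with interview {with_interview:.0%}")
```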

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§103: 53.4% (+13.4% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 132 resolved cases.

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicants

This communication is in response to the amendment filed on 1/30/2026. Claims 1-6, 9-17, 19, and 21-24 are pending. Claims 7, 8, 18, and 20 have been cancelled, and claims 21-24 have been newly added.

Response to Arguments

Applicant's arguments with respect to claims 1-6, 9-17, 19, and 21-24 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Cederlof (US Patent No. 9,723,293) in view of RAMON (U.S. Publication No. 2016/0337602).

Regarding claim 1, Cederlof teaches a method comprising: accessing an image of a scene (Column 8, lines 2-3, the cameras 110 access the different projection areas 104(1)-(5) in the environment 100; Column 12, lines 38-43, The image captured by the camera 410 is processed by the analysis module 122 to identify projection surfaces … and protruding objects … within the scene 402) comprising a display device and at least a portion of an environment of the display device (Column 12, lines 38-43, The image captured by the camera 410 is processed by the analysis module 122 to identify projection surfaces … and protruding objects … within the scene 402; Column 5, lines 18-20 and 24-25, these usable projection areas 104(1)-(5) represent flat projection surfaces within the environment that are free from objects that protrude from the surface … protruding objects, such as picture frames, televisions; FIG. 1), the portion of the environment comprising a surface in the vicinity of the display device (Column 5, lines 29-31, The first area 104(1), for instance, comprises the portion of the left-most wall excluding the television hanging from the wall); determining, based on the portion of the environment of the display device, a region of the surface that is available to project one or more content items onto (Column 7, lines 2-6, the analysis module 122 may identify flat projection surfaces, protruding objects, and the usable projection areas 104(1)-(5) comprising the projection surfaces less the portions of these surfaces occluded by the objects; Column 5, lines 26-29, the first three illustrated usable projection areas (or “projection areas”) 104(1)-(3) comprise portions of walls within the environment that are free from protruding objects; Column 3, lines 59-60, the system may project this content adjacent to the television such that the user can easily view both the movie/show and the supplemental content) … projecting, by a projector, the one or more content items (Column 6, lines 48-49, cause the projection module 124 to project the content via the projector 108) onto the region of the surface (Column 7, lines 32-39, After identifying, and potentially tailoring, the usable projection areas 104(1)-(5), the analysis module 122 may provide this information to the projection module 124. The projection module 124 may then obtain the content that is to be projected, and may pass this content to the projector 108 along with an indication of where the content is to be projected within the environment. In response, the projector 108 may project the content at the indicated location).

Cederlof does not expressly teach dividing the region of the surface that is available to project one or more content items onto into a plurality of subregions by defining a size, a shape, and a location of each subregion in the region of the surface; determining a layout for projecting each of the one or more content items onto one of the plurality of subregions by evaluating, based at least on the size and shape of each subregion, a plurality of potential layouts projecting the one or more content items onto the plurality of subregions; and … according to the determined layout.

However, RAMON teaches dividing the region of the surface that is available to project one or more content items onto into a plurality of subregions ([0051] FIG. 1 shows an illustration of prior art where sub-areas for new content are created by recursive bisection. The initial area in 10 can be divided by horizontal or vertical bisection, as shown in 11. The vertically bisected area is then further divided in three ways, as shown in 12. FIG. 1; [0052] An area called the “canvas” is designated by reference number 20. All content to be displayed will reside within this area ... 27 in FIG. 2b) shows the result of such a division. The canvas 27 is split into sub-areas 30 and 31 by the line 29) by defining a size, a shape, and a location of each subregion in the region of the surface ([0092] The software can optionally include code segments which when executed on a processing engine provide a method step or a means for making the size of each respective sub-area proportional to the number of symbols of the video image data assigned to it; [0095] The software can include code segments which when executed on a processing engine provide a method step or a means for dividing a display area efficiently in any number of arbitrary polygons, and in particular rectangles); determining a layout for projecting each of the one or more content items onto one of the plurality of subregions by evaluating, based at least on the size and shape of each subregion, a plurality of potential layouts projecting the one or more content items onto the plurality of subregions; and ([0023] the rule associated with a given sub-area governs a position and/or a size of video image data within said given sub-area, relative to said given sub-area and/or relative to a position of other video image data allocated to said given sub-area; [0024] size and position based rules are easy to define, and efficiently lead to a good, e.g. optimal usage of the display area on the basis of geometrical criteria; [0053] In this optimization process, more than one arrangement of the sub-areas is evaluated and the best resulting layout is chosen; [0097] comparing different layouts, i.e. sub-area allocations, in an objective way, and the selection of a good, e.g. an optimal layout can be performed automatically) … according to the determined layout ([0097] an optimal layout can be performed automatically; [0018] different layouts, i.e. sub-area allocations, can be compared in an objective way, and the selection of a satisfactory layout, e.g. an optimal layout can be performed automatically).

It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of Cederlof to incorporate the step/system of managing the position/size of content in sub-areas by evaluating multiple potential layouts to find an optimal layout based on shape and size, as taught by RAMON. The suggestion/motivation for doing so would have been to improve the use of the display area by dividing the display area into sub-areas ([0016] It is an advantage of these embodiments that a display area can efficiently be divided into sub-areas by any number of arbitrary polygons, and in particular rectangles; [0022] It is an advantage of this embodiment that video streams can be arbitrarily combined within a given sub-area, which can lead to more optimal use of the total display area). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Cederlof with RAMON to obtain the invention as specified in claim 1.
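To make the claim 1 mapping concrete, here is a minimal sketch of the technique the rejection attributes to the combination: recursive bisection of an available region into subregions (per the RAMON citations), then evaluation of candidate layouts to keep the best. Every identifier and the aspect-ratio scoring heuristic are hypothetical, taken from neither the application nor the references.

```python
# Sketch only: divide a region into subregions by recursive bisection, then
# score candidate item-to-subregion assignments and keep the best layout.
from dataclasses import dataclass
from itertools import permutations

@dataclass(frozen=True)
class Rect:
    x: float  # location of the subregion within the region
    y: float
    w: float  # size and shape of the subregion
    h: float

def bisect(region: Rect, n: int) -> list[Rect]:
    """Divide a region into n subregions by recursive bisection."""
    if n == 1:
        return [region]
    k = n // 2
    if region.w >= region.h:  # split along the longer axis
        w1 = region.w * k / n
        a = Rect(region.x, region.y, w1, region.h)
        b = Rect(region.x + w1, region.y, region.w - w1, region.h)
    else:
        h1 = region.h * k / n
        a = Rect(region.x, region.y, region.w, h1)
        b = Rect(region.x, region.y + h1, region.w, region.h - h1)
    return bisect(a, k) + bisect(b, n - k)

def fit(aspect: float, sub: Rect) -> float:
    """Hypothetical score: how well an item's aspect ratio fits a subregion."""
    s = sub.w / sub.h
    return min(aspect, s) / max(aspect, s)

def best_layout(region: Rect, aspects: list[float]) -> list[tuple[float, Rect]]:
    """Evaluate every assignment of items to subregions; return the best one."""
    subs = bisect(region, len(aspects))
    return max(
        ([*zip(order, subs)] for order in permutations(aspects)),
        key=lambda layout: sum(fit(a, s) for a, s in layout),
    )

# Example: three 16:9 content items on a 4 m x 2.5 m wall region
print(best_layout(Rect(0, 0, 4.0, 2.5), [16 / 9] * 3))
```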
Regarding claim 4, the combination of Cederlof and RAMON teaches all the limitations of claim 1 above. Cederlof teaches wherein determining, based on the portion of the environment of the display device, the region of the surface that is available to project one or more content items onto comprises (Column 7, lines 2-6, the analysis module 122 may identify flat projection surfaces, protruding objects, and the usable projection areas 104(1)-(5) comprising the projection surfaces less the portions of these surfaces occluded by the objects; Column 5, lines 26-29, the first three illustrated usable projection areas (or “projection areas”) 104(1)-(3) comprise portions of walls within the environment that are free from protruding objects): detecting, based on the image, an exclusion area of the surface comprising one or more areas of the surface that are obscured by one or more objects, textures, or colors (Column 7, lines 5-6, the analysis module 122 may identify flat projection surfaces, protruding objects, and the usable projection areas 104(1)-(5) comprising the projection surfaces less the portions of these surfaces occluded by the objects); and determining a usable area of the surface in the vicinity of the display device, the usable area comprising an area of the surface in the vicinity of the display device that does not include the exclusion area (Column 5, lines 29-31, The first area 104(1), for instance, comprises the portion of the left-most wall excluding the television hanging from the wall; Column 5, lines 37-47, the ARFN 102 may identify usable projection areas on tables within the environment 100, a floor of the environment 100, and/or other projection surfaces within the environment 100, which may be flat, curvilinear, or another shape. The projection area 104(4), for instance, illustrates that the ARFN 102 has identified the exposed portion of the table that is free from the objects residing thereon as a usable projection area within the environment. In addition, the ARFN 102 has identified the floor, less the area occluded by the illustrated table and chair, as a usable projection area 104(5)).

With respect to claim 17, arguments analogous to those presented for claim 1 are applicable. With respect to claim 19, arguments analogous to those presented for claim 1 are applicable.

Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Cederlof (US Patent No. 9,723,293) in view of RAMON (U.S. Publication No. 2016/0337602) and further in view of Chernichenko et al. (U.S. Publication No. 2005/0179688) (hereafter, “Chernichenko”).

Regarding claim 2, the combination of Cederlof and RAMON teaches all the limitations of claim 1 above. The combination of Cederlof and RAMON does not teach wherein the image comprises a skewed perspective relative to an orientation of the surface, and the method further comprises: determining, based on one or more geometric features of the scene, a transformation of the image to deskew the perspective; and transforming the image according to the determined transformation. However, Chernichenko teaches wherein the image ([0074] procedure 900 identifies an imaged scene to process (block 902). The imaged scene may be a photograph, a digitized image, or other representation of a scene) comprises a skewed perspective relative to an orientation of the surface (FIG. 2; [0019] an exemplary object such as the rectangle B (labeled 214), when rotated out of the plane parallel to the camera plane by angle β about an axis parallel to Y, will undergo perspective distortion ... When imaged, the object will appear similar to the image shown in corresponding circle 216), and the method further comprises: determining, based on one or more geometric features of the scene, a transformation of the image to deskew the perspective ([0074] The procedure then identifies four selected points within the image (block 904). These four points are identified, for example, by a user and form a quadrilateral shape ... two perspective vanishing points are defined using the four points identified above (block 906); [0075] Procedure 900 continues by determining a first set of reference points within the image (block 908). This first set of reference points may be determined, for example, by selecting reference points that define a rectangle or square within the quadrilateral shape ... The procedure transforms the first set of reference points to determine a second set of reference points (block 910). This transformation may include applying a transformation matrix to the first set of reference points. Procedure 900 continues by estimating an aspect ratio based on the second set of reference points (block 912). The first set of reference points are then modified to have the same aspect ratio as the estimated aspect ratio (block 914)); and transforming the image according to the determined transformation ([0075] the procedure transforms the image by mapping the second set of reference points onto the modified first set of reference points (block 916). Completing the above procedure results in a correction of perspective distortion in the imaged scene). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of Cederlof and RAMON to incorporate the step/system of correcting skewed perspective in the image by calculating and applying a transformation based on geometric features, as taught by Chernichenko. The suggestion/motivation for doing so would have been to reduce the effects of peripheral distortion in the image ([0017] The systems and methods discussed herein reduce or eliminate the effects of peripheral distortion from images acquired by a camera or other image capture device). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Cederlof and RAMON with Chernichenko to obtain the invention as specified in claim 2.

Regarding claim 3, the combination of Cederlof and RAMON with Chernichenko teaches all the limitations of claim 2 above. The combination of Cederlof and RAMON does not teach wherein the one or more geometric features comprise at least one of: a shape of the display device in the image; a shape of an object in the image; or a content displayed on the display device in the image. However, Chernichenko teaches wherein the one or more geometric features comprise at least one of: a shape of the display device in the image; a shape of an object in the image; or a content displayed on the display device in the image (FIG. 2; [0019] When imaged in a camera, an object lying parallel to XY plane 208, such as rectangle A (labeled 212), undergoes no perspective distortion ... an exemplary object such as the rectangle B (labeled 214) … will undergo perspective distortion ... When imaged, the object will appear similar to the image shown in corresponding circle 216; [0022] The various systems and methods described herein provide perspective correction of an imaged scene, while still maintaining a good approximation of the aspect ratios of objects contained in the imaged scene). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of Cederlof and RAMON to incorporate the step/system of leveraging an object's rectangular shape in the image as a geometric feature, as taught by Chernichenko. Motivation for this combination has been stated in claim 2.
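The deskew step mapped to Chernichenko in claims 2 and 3 is standard four-point perspective correction: a quadrilateral observed in the image is mapped to an upright rectangle via a homography. Below is a hedged sketch using general-purpose OpenCV functions; the file name, point coordinates, and assumed 16:9 aspect ratio are illustrative only and are not taken from the record.

```python
# Sketch of four-point perspective correction in the spirit of the cited
# procedure (quadrilateral -> transformation -> deskewed image).
# Coordinates, file name, and the 16:9 target are hypothetical.
import cv2
import numpy as np

image = cv2.imread("scene.jpg")  # hypothetical captured image of the scene

# Corners of a feature known to be rectangular (e.g., the display's frame)
# as they appear in the skewed image: top-left, top-right, bottom-right, bottom-left.
src = np.float32([[412, 180], [1250, 235], [1228, 705], [398, 660]])

# Target corners: an upright rectangle with the known aspect ratio (16:9 assumed).
w, h = 960, 540
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

M = cv2.getPerspectiveTransform(src, dst)         # 3x3 homography matrix
deskewed = cv2.warpPerspective(image, M, (w, h))  # image with perspective removed
```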
Claims 5, 6, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Cederlof (US Patent No. 9,723,293) in view of RAMON (U.S. Publication No. 2016/0337602) and further in view of NATORI (U.S. Publication No. 2022/0394220).

Regarding claim 5, the combination of Cederlof and RAMON teaches all the limitations of claim 4 above. The combination of Cederlof and RAMON does not expressly teach further comprising dividing the usable area of the surface into a number of potential projection areas, wherein the number of potential projection areas equals a number of the one or more content items. However, NATORI teaches further comprising dividing the usable area of the surface into a number of potential projection areas ([0052] It is possible for the projection setter 113 to control the display system 1000 so that the projector 1A and the projector 1B project the videos in accordance with the setting data 122. In this case, the projection setter 113 designates presence or absence of the division of the projection area 42, the number of the areas into which the projection area 42 is divided, the videos to be displayed in the respective areas obtained by dividing the projection area 42; [0025] The projectors 1 project image light on a screen SC as a projection surface ... it is also possible to use a wall surface of a building), wherein the number of potential projection areas equals a number of the one or more content items ([0074] The projection setter 113 of the projector 1A uses the large area 4 as three areas based on the fact that the three videos are output in the display system 1000. In this case, the projection setter 113 divides the large area 4 into three areas different in area in accordance with the priorities determined by the priority judge 114. Specifically, the large area 4 is divided into the small area 4A equal to the projection area 41, and a small area 4C and a small area 4D included in the projection area 42) and each of the plurality of subregions corresponds to one of the potential projection areas ([0069] The projection setter 113 assigns the first video P1 to the small area 4A, and assigns the second video P2 to the small area 4B. Thus, as shown in FIG. 5, the first video P1 is displayed in the projection area 41, and the second video P2 is displayed in the projection area 42. The small areas each correspond to an example of a sub-area). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of Cederlof and RAMON to incorporate the step/system of dividing the projection area into a specific number of sections that corresponds to the number of videos, as taught by NATORI. The suggestion/motivation for doing so would have been to improve the display control by automatically adjusting the display of videos across a large screen area ([0095] According to the display control method related to the present disclosure, it is possible to appropriately assign the videos to the large area 4 to display the videos in accordance with the number of the videos output by the PCs 2 without placing a burden on the user who operates the PCs 2). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Cederlof and RAMON with NATORI to obtain the invention as specified in claim 5.

Regarding claim 6, the combination of Cederlof and RAMON with NATORI teaches all the limitations of claim 5 above. The combination of Cederlof and RAMON does not expressly teach wherein the potential projection areas each comprise a rectangular region. However, NATORI teaches wherein the potential projection areas each comprise a rectangular region (FIG. 7; [0074] the large area 4 is divided into the small area 4A equal to the projection area 41, and a small area 4C and a small area 4D included in the projection area 42). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of Cederlof and RAMON to incorporate the step/system of using projection areas having rectangular regions, as taught by NATORI. Motivation for this combination has been stated in claim 5.

Regarding claim 16, the combination of Cederlof and RAMON teaches all the limitations of claim 1 above. The combination of Cederlof and RAMON does not expressly teach further comprising determining a number of the one or more content items to project by the projector based on at least one of (1) a content displayed on the display device or (2) one or more user preferences. However, NATORI teaches further comprising determining a number of the one or more content items to project by the projector based on at least one of (1) a content displayed on the display device or (2) one or more user preferences ([0108] it is possible for the projection area identifier 222 to identify the number of, and the positional relationship between, the projection areas based on the content input by the PC input 25; [0022] The PCs 2 are so-called video sources; [0024] the projectors 1 project the videos based on the video data output by the PCs 2 to the projectors 1; [0061] The PC display 24 displays an image or a video on the display device). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of Cederlof and RAMON to incorporate the step/system of identifying the number of the projection areas based on the content input, as taught by NATORI. Motivation for this combination has been stated in claim 5.

Claims 14, 22, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Cederlof (US Patent No. 9,723,293) in view of RAMON (U.S. Publication No. 2016/0337602) and further in view of Zavesky et al. (U.S. Publication No. 2023/0353715) (hereafter, “Zavesky”).

Regarding claim 14, the combination of Cederlof and RAMON teaches all the limitations of claim 1 above. The combination of Cederlof and RAMON teaches further comprising: determining, based on the image of the scene, one or more visual attributes comprising one or more of a color scheme of the scene or a geometric pattern of the scene (Column 8, lines 18-24, the analysis module 122 may identify the projection surface with reference to depth and/or color information captured by the cameras 110. For instance, the analysis module 122 may identify the illustrated wall based on the fact that the wall is generally flat and has a predominant background color that is consistent for a substantial portion of the surface); and customizing a visual parameter of at least one of the one or more content items (Column 3, lines 7-9, In addition to identifying and utilizing usable projection areas as described above, the techniques described herein may customize the projection of the content; Column 9, lines 13-14, the ARFN 102 may tailor the content and/or the projection of the content; Column 16, lines 4-5, the ARFN 102 may alter a size of the projected content 1004). The combination of Cederlof and RAMON does not expressly teach based on the determined one or more visual attributes. However, Zavesky teaches customizing … at least one of the one or more content items based on the determined one or more visual attributes ([0085] The projection processor 222 identifies a target surface within the proximal physical environment 225, determines an understanding of the target surface and modifies the content, e.g., the original image 227 to obtain a modified image 228, responsive to the identification and understanding obtained for the target surface; [0082] The surface evaluation module 212 may analyze a surface for projection and/or display characteristics, e.g., using target surface for visualizations ... The surface evaluation module 212 may characterize and/or otherwise evaluate, compare and contrast surfaces according to generally known reflectivity and/or refraction values, e.g., constants. The surface evaluation module 212 may rely on other surface features, such as size characterization, textural, density estimation for materials, e.g., a fabric or glass). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of Cederlof and RAMON to incorporate the step/system of modifying content based on the identified surface characteristics, as taught by Zavesky. The suggestion/motivation for doing so would have been to improve the projection result ([0021] Other adaptations are provided to source content prior to projection to improve and/or otherwise enhance the projection result; [0045] The view adjustment may include one or more adaptations to the image content adapted to enhance, improve and/or otherwise prepare the content for presentation on the target surface(s)). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Cederlof and RAMON with Zavesky to obtain the invention as specified in claim 14.

With respect to claim 22, arguments analogous to those presented for claim 14 are applicable. With respect to claim 24, arguments analogous to those presented for claim 14 are applicable.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Cederlof (US Patent No. 9,723,293) in view of RAMON (U.S. Publication No. 2016/0337602) and further in view of Singh et al. (U.S. Publication No. 2021/0144346) (hereafter, “Singh”).

Regarding claim 15, the combination of Cederlof and RAMON teaches all the limitations of claim 1 above. The combination of Cederlof and RAMON does not expressly teach further comprising determining the one or more content items to project by the projector based on at least one of (1) a content displayed on the display device or (2) one or more user preferences. However, Singh teaches further comprising determining the one or more content items to project by the projector based on at least one of (1) a content displayed on the display device or (2) one or more user preferences ([0026] a user of the projection device 102 may wish to project the movie “The Dark Knight” for viewing. Projection device 102 receives the movie “The Dark Knight” to be projected on the target area. In another example, as illustrated in FIG. 2, the user may wish to play the video game “Fortnite” by projecting the characters from the game on the target area. Projection device 102 receives the content for the game “Fortnite” to be projected on the target area). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of Cederlof and RAMON to incorporate the step/system of determining content item(s) to project by the projector based on user preferences, as taught by Singh. The suggestion/motivation for doing so would have been to improve the viewing experience for the user ([0003] the systems and methods provided below identify a plurality of candidate areas, automatically determine display conditions of each of the candidate areas, determine visual characteristics of the content to be projected, and select a target area on which to project the content to enhance the viewing experience for the user; [0004] the content attributes may include, for example, information on size, texture, color, reflectivity, intensity, and/or amount of ambient light that would result in an improved viewing experience). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Cederlof and RAMON with Singh to obtain the invention as specified in claim 15.
Claims 9, 10, 21, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Cederlof (US Patent No. 9,723,293) in view of RAMON (U.S. Publication No. 2016/0337602) and further in view of Singh et al. (U.S. Publication No. 2021/0144346) (hereafter, “Singh”) and Zavesky et al. (U.S. Publication No. 2023/0353715) (hereafter, “Zavesky”).

Regarding claim 9, the combination of Cederlof and RAMON teaches all the limitations of claim 1 above. The combination of Cederlof and RAMON does not expressly teach further comprising determining the layout based on a layout score provided by a trained neural network. However, Singh teaches further comprising determining the layout based on a layout score provided ([0005] the control circuitry analyzes the captured images to determine a number of candidate area characteristics such as size, texture, color, reflectivity, intensity, and/or amount of ambient light. The control circuitry determines a score for each of the candidate area characteristics by, for example, accessing a database storing scores corresponding to the size, texture, color, reflectivity, intensity, and/or amount of ambient light on the candidate areas). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of Cederlof and RAMON to incorporate the step/system of determining the candidate area characteristics based on the score, as taught by Singh. The suggestion/motivation for doing so would have been to improve the viewing experience for the user ([0003] the systems and methods provided below identify a plurality of candidate areas, automatically determine display conditions of each of the candidate areas, determine visual characteristics of the content to be projected, and select a target area on which to project the content to enhance the viewing experience for the user). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. The combination of Cederlof and RAMON with Singh does not expressly teach ... by a trained neural network. However, Zavesky teaches determining the layout … by a trained neural network ([0065] aspects of machine learning, e.g., including deep neural networks, may be utilized by the target surface understanding module 213 to facilitate estimations of understandings of physical target surfaces from image and/or other sensor data; [0082] The surface evaluation module 212 may analyze a surface for projection and/or display characteristics, e.g., using target surface for visualizations. Without limitation, such analysis may include any of the examples disclosed herein, including computer vision (CV) techniques; [0063] the target surface understanding module 213 may apply one or more fabric simulation techniques, e.g., evaluating types of textiles or fabric). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of the combination of Cederlof and RAMON with Singh to incorporate the step/system of determining the display characteristics by deep neural networks, as taught by Zavesky. The suggestion/motivation for doing so would have been to improve the projection result ([0021] Other adaptations are provided to source content prior to projection to improve and/or otherwise enhance the projection result; [0045] The view adjustment may include one or more adaptations to the image content adapted to enhance, improve and/or otherwise prepare the content for presentation on the target surface(s)). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Cederlof, RAMON, and Singh with Zavesky to obtain the invention as specified in claim 9.

Regarding claim 10, the combination of Cederlof, RAMON, and Singh with Zavesky teaches all the limitations of claim 9 above. Singh teaches further comprising: for each of a plurality of potential layouts ([0049] FIG. 4 is an embodiment of a process 400 for calculating a quality-of-projection indicator for a plurality of candidate areas), encoding the image and the potential layout ([0050] At step 404, control circuitry 504 analyzes the captured image to determine one or more candidate area characteristics. For example, control circuitry 504 determines one or more candidate area characteristics such as a size of the available region for projection on the candidate area (e.g., wall 104 has a larger area available for projection than the surface 108 on the desk), texture of the candidate area, color of the candidate area, reflectivity of the candidate area); for each of the plurality of potential layouts, determining, … a proposed layout score based on the encoded image and the encoded proposed layout ([0051] At step 406, control circuitry 504 assigns a respective score to each of the determined candidate area characteristics; [0070] FIG. 8A illustrates a data structure 801 corresponding to a first content to be projected (e.g., “The Dark Knight”) which includes a listing of the candidate areas 802, 804, 806 and a corresponding quality-of-projection indicator 808, 810, 812 assigned to the candidate area; [0071] FIG. 8B illustrates a data structure 803 corresponding to a second content to be projected (e.g., “Fortnite”) which includes a listing of the candidate areas 814, 816, 818, 820 and a corresponding quality-of-projection indicator 808, 810, 812 assigned to the candidate areas); and selecting the layout with the highest proposed layout score ([0055] At step 414, control circuitry 504 generates the respective quality-of-projection indicator by adding the combined characteristic scores and selects the candidate area with the highest quality-of-projection indicator as the target area on which the content is to be projected). Singh does not expressly teach ... by the trained neural network. However, Zavesky teaches by the trained neural network ([0065] aspects of machine learning, e.g., including deep neural networks, may be utilized by the target surface understanding module 213 to facilitate estimations of understandings of physical target surfaces from image and/or other sensor data; [0082] The surface evaluation module 212 may analyze a surface for projection and/or display characteristics, e.g., using target surface for visualizations. Without limitation, such analysis may include any of the examples disclosed herein, including computer vision (CV) techniques; [0063] the target surface understanding module 213 may apply one or more fabric simulation techniques, e.g., evaluating types of textiles or fabric). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of the combination of Cederlof, RAMON, and Singh to incorporate the step/system of using deep neural networks, as taught by Zavesky. Motivation for this combination has been stated in claim 9.

With respect to claim 21, arguments analogous to those presented for claim 9 are applicable. With respect to claim 23, arguments analogous to those presented for claim 9 are applicable.
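The claims 9 and 10 mapping reduces to a score-and-select loop: encode the image together with each candidate layout, score each pair, and keep the highest-scoring layout. Below is a minimal sketch with the scorer left abstract, since in the rejection's mapping Singh supplies table-driven characteristic scores while the trained neural network comes from Zavesky; all identifiers here are hypothetical.

```python
# Sketch of the score-and-select loop described for claims 9-10. The `score`
# callable is deliberately abstract: it could be a lookup of per-characteristic
# scores (the Singh mapping) or a trained model's forward pass (the Zavesky
# mapping). Nothing here is taken verbatim from either reference.
from typing import Callable, Sequence

Layout = Sequence[float]    # some encoding of subregion sizes and positions
Features = Sequence[float]  # some encoding of the captured image

def select_layout(
    image: Features,
    candidates: Sequence[Layout],
    score: Callable[[Features, Layout], float],
) -> Layout:
    """Score every (image, layout) pair and return the highest-scoring layout."""
    return max(candidates, key=lambda layout: score(image, layout))
```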
Allowable Subject Matter

Claims 11-13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wright et al. (U.S. Publication No. 2012/0301031) teaches “dividing the region of the surface that is available to project one or more content items onto into a plurality of subregions by defining a size, a shape, and a location of each subregion in the region of the surface” ([0083] region segmenter 615 to segment a synthetic image determined by the example synthetic image subregion encoder 610 into a plurality of regions; [0059] the example synthetic image creator 420 divides the input captured image into a plurality of subregions for analysis … After dividing the input captured image into the plurality of subregions, the example synthetic image creator 420 examines each subregion of the input captured image and encodes one or more properties of the examined subregion into a respective subregion of a synthetic image; [0061] To determine one or more regions of an input captured image that correspond to primary media content included in the monitored secondary media content presentation, the example image pre-processor 405 includes an example computer vision processor 425; [0062] After segmenting the synthetic image, the example computer vision processor 425 operates to select one or more of the segmented regions that correspond to the primary media content included in the monitored secondary media content presentation represented by the synthetic image; [0026] employ computer vision techniques to segment video images representative of the secondary (e.g., post-production) media content presentation into multiple regions; [0030] the media content monitoring implementation may examine the segmented synthetic image for edges defining regions having some minimum size and certain aspect ratio; [0058] The synthetic images encode properties of the input captured images that can aid in region segmentation and selection) and “determining a layout for projecting each of the one or more content items onto one of the plurality of subregions by evaluating, based at least on the size and shape of each subregion, a plurality of potential layouts projecting the one or more content items onto the plurality of subregions” ([0064] the computer vision processor 425 may examine the selected region of the segmented image to determine whether its shape corresponds to a predefined shape consistent with a display of primary media content, such as a rectangular shape having a minimum size and an aspect ratio of 4:3 or 16:9 consistent with a typical movie, television program or commercial display; [0065] Although the example computer vision processor 425 has been described in the context of processing synthetic images determined by the example synthetic image creator 420, the example computer vision processor 425 can additionally or alternatively segment and select region(s) from the raw captured images corresponding to the monitored secondary media content presentation instead; [0070] the computer vision processor 425 is configured to output information describing the location and size of the selected region of the synthetic image corresponding to the primary media content; [0084] The example region selector 620 is included in the example computer vision processor 425 of FIG. 6 to select one or more of the segmented regions of the synthetic image that correspond to the primary media content included in the monitored secondary media content presentation represented by the synthetic image ... The example region selector 620 then selects the region(s) exhibiting substantially non-uniform variation and that are consistent with a primary media content display (e.g., such as being at least a certain minimum rectangular size with an aspect ratio of 4:3 or 16:9) as corresponding to the primary media content included in the monitored secondary media content presentation).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL C. CHANG, whose telephone number is (571) 270-1277. The examiner can normally be reached Monday-Thursday and alternate Fridays, 8:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chan S. Park, can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL C CHANG/
Examiner, Art Unit 2669

/CHAN S PARK/
Supervisory Patent Examiner, Art Unit 2669

Prosecution Timeline

Dec 03, 2023
Application Filed
Nov 15, 2025
Non-Final Rejection — §103
Dec 16, 2025
Interview Requested
Jan 14, 2026
Applicant Interview (Telephonic)
Jan 14, 2026
Examiner Interview Summary
Jan 30, 2026
Response Filed
Mar 25, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592097
REAL-TIME, FINE-RESOLUTION HUMAN INTRA-GAIT PATTERN RECOGNITION BASED ON DEEP LEARNING MODELS
2y 5m to grant; granted Mar 31, 2026
Patent 12579672
STEREO VISION-BASED HEIGHT CLEARANCE DETECTION
2y 5m to grant; granted Mar 17, 2026
Patent 12573047
Control Method, Device, Equipment and Storage Medium for Interactive Reproduction of Target Object
2y 5m to grant; granted Mar 10, 2026
Patent 12548296
Spatially Preserving Flattening in Deep Learning Neural Networks
2y 5m to grant; granted Feb 10, 2026
Patent 12541868
Image Registration Method and Apparatus, Electronic Apparatus, and Storage Medium
2y 5m to grant; granted Feb 03, 2026
Study what changed to get past this examiner, based on the five most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 89% (99% with interview, +11.7%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate
Based on 132 resolved cases by this examiner. Grant probability derived from career allow rate.
