Prosecution Insights
Last updated: April 19, 2026
Application No. 18/477,389

IMAGING PRIVACY FILTER FOR OBJECTS OF INTEREST IN HARDWARE FIRMWARE PLATFORM

Status: Non-Final OA (§102, §103)
Filed: Sep 28, 2023
Examiner: HAUSMANN, MICHELLE M
Art Unit: 2671
Tech Center: 2600 (Communications)
Assignee: ATI Technologies ULC
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 76% (658 granted of 863 resolved; +14.2% vs TC avg), above average.
Interview Lift: +21.6% higher allowance on resolved cases with an interview (strong).
Typical Timeline: 3y 1m average prosecution; 23 applications currently pending.
Career History: 886 total applications across all art units.
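The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, assuming a hypothetical record layout with granted and had_interview flags (the field names are illustrative, not this product's actual schema):

from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

def allow_rate(cases):
    # Career allow rate: granted / resolved (658 / 863 = 0.762, shown as 76%).
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    # Allow-rate gap between resolved cases with and without an examiner
    # interview (the +21.6% figure above).
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)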

Statute-Specific Performance

§101: 14.6% (-25.4% vs TC avg)
§103: 61.2% (+21.2% vs TC avg)
§102: 5.7% (-34.3% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 863 resolved cases.
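Each delta is just the examiner's rate minus the estimated Tech Center rate. A small sketch with the panel's numbers hard-coded; notably, back-solving the TC estimate from every displayed delta yields 40.0%, which suggests the deltas were computed against a single TC-wide baseline rather than per-statute averages (an inference from the numbers, not a documented methodology):

# Examiner's per-statute rates from the panel above, and the TC-average
# estimate implied by each displayed delta (rate minus delta).
examiner_rate = {"§101": 14.6, "§103": 61.2, "§102": 5.7, "§112": 10.1}
delta_vs_tc = {"§101": -25.4, "§103": 21.2, "§102": -34.3, "§112": -29.9}

for statute, rate in examiner_rate.items():
    tc_estimate = rate - delta_vs_tc[statute]  # back-solves to 40.0 in each case
    print(f"{statute}: {rate:.1f}% ({delta_vs_tc[statute]:+.1f}% vs TC avg {tc_estimate:.1f}%)")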

Office Action

Grounds: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 7, 10, 15, and 17 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Nicholson et al. (US 20230306610 A1).

Regarding claims 1 and 10, Nicholson et al. disclose a method of filtering objects of interest of images captured at a computing device (curate the background environment behind the first user, as captured within a field of view of a camera, [0023]), the method comprising; and computing device for filtering objects of interest of images comprising: an image capturing device (curate the background environment behind the first user, as captured within a field of view of a camera, [0023]); memory configured to store objects of interest and a processor configured to, for an image captured by the image capturing device (abstract, [0004], [0029], [0036]): for a captured image (camera, image, video, [0023]): determining one or more regions of interest in the captured image based on one or more objects of interest (The system may determine, either autonomously or through user input, which portions of the background environment to leave as is and which portions of the background environment to modify. Thus, the system enables a user to select, on an object-by-object basis, which objects the user would like to conceal from the outgoing image data without requiring the user to blur or replace the entire background environment with a stock background, [0022], The term “object” as used herein can refer to items, architectural elements, areas of a space defined between items and/or architectural elements, and/or the like. The items may include books, photographs, electronic devices, papers, posters, paintings, signs, light fixtures, lamps, furniture, and the like. The architectural elements may include windows, doorframes, walls, shelves, and/or the like. The areas between items and/or architectural elements may include a hallway and/or a doorway that shows a portion of another room visible in the background environment, and the like, [0024], After the segmentation of the input image data, the controller 102 may identify a set of one or more of the objects 310 in the background environment 304 to modify. For example, some of the objects 310 may be deemed by the user as too personal or private, or simply not appropriate for the tenor of the video conference call. The set of objects 310 may be identified in order to conceal those objects 310 from view by persons that view the video stream showing the room 308, [0043], FIG. 4 illustrates an annotated image 400 that is generated by the controller 102 of the image alteration system 100 according to an embodiment. In an embodiment, the controller 102 prompts the user to select the set of objects 310 to modify, The graphic indicators 402 indicate the objects 310 in the background environment 304 that were located via the image analysis and segmentation process. The indicators 402 are illustrated in FIG. 4 as ovals or ellipses positioned to overlap and/or surround the corresponding objects 310. The ovals or ellipses may have unfilled or transparent interior areas to enable the user to see which object 310 is surrounded and/or overlapped by each indicator 402. In other embodiments, the indicators 402 may have a different shape or characteristic. For example, the indicators 402 may be closed shapes that outline the objects 310 by extending along the perimeter contours of the objects 310, [0044], In an embodiment, the user may select, via the input device 108, one or more of the graphic indicators 402 and/or objects 310 to set a status for the selected objects 310 as “keep” or “modify”. If the user desires to conceal a first object 310, then the user may control the input device 108 to provide a user input selection of the graphic indicator 402 associated with the first object 310, [0045]); and modifying the captured image for display based on the determined one or more regions of interest, wherein the captured image is displayed without the one or more objects of interest being viewable (The first user may modify objects that are personal or private by choosing to blur, remove, or replace those objects, For example, if the first user selects to blur a family photo, then the video stream showing the first user that is received and displayed on the remote computer devices during the conference call shows a blurred image of the family photo without blurring other portions of the background surrounding the family photo, [0023], With respect to the blurring mode, the controller 102 may blur an object 310 by obscuring the appearance of the object 310. The object 310 may be obscured by reducing the pixel resolution, modifying some of the pixels that depict the object 310, or superimposing a stock blur image over the pixels that depict the object 310, [0050]).

Regarding claims 7 and 15, Nicholson et al. disclose the method and device of claims 1 and 10. Nicholson et al. further indicate determining the one or more regions of interest by performing inference processing, using a neural network, on the captured image (In an embodiment, the controller 102 attempts to recognize and identify the type or class of the objects 310 in the image data during the image segmentation. The controller 102 may use machine learning (e.g., artificial intelligence). For example, the one or more processors 112 may include an artificial neural network 122 (shown in FIG. 1), [0040], The neurons in the layers of the neural network 122 may examine characteristics of the pixels of the input image, such as the intensities, colors, or the like, to determine the classification vectors for the various pixels. The neural network 122 may assign or associate different pixels with different object classes based on the characteristics of the pixels. An object class is a type or category of an object 310 appearing in the image 300. For example, a window can be a first object class, a wall can be a different, second object class, a framed painting or photograph can be a third object class, and objects on shelves, such as books, can be a fourth object class, [0041]) [identification, classification are interpreted as inferencing].

Regarding claim 17, Nicholson et al. disclose the computing device of claim 10. Nicholson et al. further indicate the modified image is displayed at the display (abstract, [0023], [0044], [0062]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 2-5 and 11-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nicholson et al. (US 20230306610 A1) as applied to claims 1 and 10, further in view of Tang et al. (US 20220058394 A1).

Regarding claims 2 and 11, Nicholson et al. disclose the method and device of claims 1 and 10. Nicholson et al. further disclose identifying an application executing on the computing device as a video conferencing application: in response to the identification of the application executing on the computing device, selecting the one or more objects of interest from a stored list of objects of interest; and determining the one or more regions of interest in the captured image based on the selected one or more objects of interest (Optionally, the one or more processors may be configured to identify the set of one or more objects to modify based on designated settings of a user profile stored on the memory or another data storage device that is operably connected to the one or more processors, [0008], In an embodiment, the controller 102 may display the annotated image 400 during a set-up or preview stage before the video feed generated by the camera 104 is remotely transmitted. For example, upon initiating a video conference application on the user computer device 202, the controller 102 may present the annotated image 400 to the user prior to joining the meeting. In another example, the user may select, via the input device 108, to update user settings. In response to the selection to update user settings, the controller 102 may display the annotated image 400 to enable the user to set the status of the objects 310. Once the statuses for the objects 310 are set, the controller 102 may save the object statuses as a user profile. For a subsequent streaming event (e.g., video conference call), the user may simply select the user profile from a list of profiles, and the controller 102 accesses the object statues from the user database 118 of the memory 114. The user may set multiple different profiles which have different objects 310 modified and/or the same objects 310 modified but in a different way. For example, the user may set one profile for work video conference meetings, a second profile for video conferences with extended family, and a third profile for video conferences with friends. The image alteration system 100 enables the user to curate the particular objects 310 shown in the background environment 304 for each of the different types of video streaming events. The profiles represent short cuts that allows the user to quickly access a pre-selected ensemble of object appearance changes, without setting individual object statuses. For example, if the controller 102 receives a command to implement a specific user profile, then after segmenting the input image data, the controller 102 identifies the set of one or more objects 310 to modify based on the designated settings in the selected user profile., [0046], The stock objects in the library 120 may be categorized based on type, size, and/or the like. The user may select the stock object to replace the given object 310 via the input device 108, [0054], The set of objects 310 that are modified may differ for different audiences of the streaming event. For example, a “work” profile may replace a family picture with a stock inspirational poster, may replace family photo albums with stock images of textbooks, and/or the like, [0056], Optionally, the set of objects to modify may be based on designated settings of a user profile and/or user input selections generated via an input device 108. The user profile may be stored in a memory device, such as the memory 114 of the controller 102, [0061], The image alteration system and method described herein allows the user to create a physical background that is customized for the audience of the video streaming event, such as a video conference call. The customization may be integrated into the existing background without universally blurring or replacing the background, [0065]). It would have been obvious at the time of filing to one of ordinary skill in the art the reference is “selecting the one or more objects of interest from a stored list of objects of interest”, as there are stored profiles with which objects to replace, and replacement objects are stored in a library, therefore together these teach the limitation. It would have been obvious at the time of filing to one of ordinary skill in the art the reference is “determining the one or more regions of interest in the captured image based on the selected one or more objects of interest”, as the reference teaches determining which objects to replace, and the replacement area shape is based on the object that is replaced. Another reference is added however for more explicit teaching of these limitations. Tang et al. teach in response to the identification of the application executing on the computing device, selecting the one or more objects of interest from a stored list of objects of interest (In some embodiments, the cloud service 202 may be configured to provide resources such as training data and/or a database of feature maps (e.g., feature maps of recognized objects that may be used as a basis to perform object recognition and/or classification), [0118], In some embodiments, the settings may be stored in the cloud service 202 as the event settings storage 210 (e.g., using a secured account). The signal PREFS may comprise the objects and/or events of interest selected by the user. In one example, the signal PREFS may enable the user to select people and animals as the objects and/or events of interest, [0123]) and determining the one or more regions of interest in the captured image based on the selected one or more objects of interest (In the example shown, the distortion effect 392 may have a circular shape. In some embodiments, the distortion effect 392 may be intelligently selected to have a shape that corresponds to the shape of the body part and/or face that may obscured. For example, the processor 102 may be configured to identify the shape of the face 378 based on the characteristics of the pixels (e.g., an arrangement of similar colors) to determine the shape for the distortion effect 392. In another example, the processor 102 may be configured to apply the distortion effect 392 to a randomly selected area around the face 378 (or other body parts) to help conceal identifying features of the family member 354 (e.g., conceal a body shape or clothes worn that might be used to identify the family member 354). In one example, the distortion effect 392 may be a mask (e.g., a colored mask overlaid on top of the face of the family member). In another example, the distortion effect 392 may be a blur effect. In yet another example, the distortion effect 392 may be a mosaic effect. In still another example, the distortion effect 392 may comprise cropping and/or removing pixels (e.g., replacing with null data or random data). In yet another example, the distortion effect 392 may comprise replacing the face 378 with an alternate graphic. The type of the distortion effect 392 applied may be varied according to the design criteria of a particular implementation, [0205]). Nicholson et al. and Tang et al. are in the same art of object identification (Nicholson et al., [0008]; Tang et al., [0118]). The combination of Tang et al. with Nicholson et al. will allow for determining the one or more regions of interest in the captured image based on the selected one or more objects of interest. It would have been obvious at the time of filing to combine the determining of Tang et al. with the invention of Nicholson et al. as this was known at the time of filing, the combination would have predictable results, and as Tang et al. indicate, “It would be desirable to implement a person-of-interest centric timelapse video with AI input on home security camera to protect privacy” ([0006]), “The edge AI home security device/camera may be configured to implement artificial intelligence (AI) technology. Using AI technology, the edge AI camera may be a more powerful (e.g., by providing relevant data for the user) and a more power efficient solution than using a cloud server in many aspects” ([0026]), “Implementing various functionality of the processor 102 using the dedicated hardware modules 190a-190n may enable the processor 102 to be highly optimized and/or customized to limit power consumption, reduce heat generation and/or increase processing speed compared to software implementations” ([0099]) thereby providing privacy, efficiency, and customizability advantages to the combination of inventions.

Regarding claim 3, Nicholson et al. and Tang et al. disclose the method of claim 2. Tang et al. further teach the stored list of objects of interest are stored in a secure portion of memory which is not accessible by an operating system of the computing device (“In an example, the companion app implemented on the remote devices 204a-204n may enable the end users to adjust various settings for the camera systems 100a-100n and/or the video captured by the camera systems 100a-100n. In some embodiments, the settings may be stored in the cloud service 202 as part of the event settings storage 210 (e.g., using a secured account). However, in some embodiments, to ensure privacy protection, the settings of the signal IPREFS may instead avoid communication to/from the cloud service 202. For example, a direct connection and/or a communication that does not transfer data to the cloud service 202 may be established between one or more of the remote devices 204a-204n and the edge AI camera 100i. The signal IPREFS may comprise the faces and/or identities of various people that may be selected by the user. The signal IPREFS may enable the user to select people (e.g., faces) as privacy events. In one example, the signal IPREFS may enable the user to select people (e.g., faces) to enable the processor 102 to distinguish between people that are considered privacy events and people that are not considered privacy events. Generally, the data from the signal IPREFS may not be stored in the cloud services 202”, [0132], For example, there may be no concern of leaking family privacy information (e.g., video and/or images of family members and/or the behavior of family members) because the faces of the family members may be enrolled locally using the app on the remote devices 204a-204n and the feature set IFEAT generated from the enrolled faces may be sent via a local network rather than through the cloud service 202. The data about the events and/or objects of interest may be routed through the cloud service 202, but the family privacy information may never be uploaded to the cloud service 202., [0135]).

Regarding claim 4, Nicholson et al. and Tang et al. disclose the method of claim 2. Nicholson et al. and Tang et al. further indicate the stored list of objects of interest comprise one or more nonviewable objects of interest, and the captured image is modified to prevent the one or more nonviewable objects of interest, in the stored list of objects of interest, from being viewable in the captured image (Nicholson et al., Thus, the system enables a user to select, on an object-by-object basis, which objects the user would like to conceal from the outgoing image data without requiring the user to blur or replace the entire background environment with a stock background. The image data produced by the system may resemble the user's actual room or space except for the select objects modified. The system can successfully conceal personal and private aspects visible in the background of a camera view, without substantially changing the image aesthetics, [0022], The first user may modify objects that are personal or private by choosing to blur, remove, or replace those objects., [0023], After the segmentation of the input image data, the controller 102 may identify a set of one or more of the objects 310 in the background environment 304 to modify. For example, some of the objects 310 may be deemed by the user as too personal or private, or simply not appropriate for the tenor of the video conference call. The set of objects 310 may be identified in order to conceal those objects 310 from view by persons that view the video stream showing the room 308., [0043], The profiles represent short cuts that allows the user to quickly access a pre-selected ensemble of object appearance changes, without setting individual object statuses. For example, if the controller 102 receives a command to implement a specific user profile, then after segmenting the input image data, the controller 102 identifies the set of one or more objects 310 to modify based on the designated settings in the selected user profile., [0046], The set of objects 310 that are modified may differ for different audiences of the streaming event. For example, a “work” profile may replace a family picture with a stock inspirational poster, may replace family photo albums with stock images of textbooks, and/or the like, [0056]; Tang et al., Embodiments of the present invention may be configured to protect the privacy of particular people when the smart timelapse video is generated. The video content (e.g., what appears in the smart timelapse video) may be automatically adjusted in response to the objects/events detected. Particular objects/events may be shown as captured in the smart timelapse video and other objects/events may be excluded and/or removed from the smart timelapse video stream. For example, the smart timelapse video stream may be generated to include the faces and behaviors of strangers but exclude the faces and behaviors of family members. Generally, the faces and/or behaviors excluded from the smart timelapse video stream may correspond to privacy concerns (e.g., identifying particular people, a person being uncomfortable being on video, preventing the storage of potentially embarrassing behaviors, etc.). The criteria for including or excluding video content may be varied according to the design criteria of a particular implementation, [0025], The signal IPREFS may be communicated via a local network in order to protect a privacy of people and/or faces of people that may be communicated (e.g., to generate feature set data)., [0131], The signal IPREFS may enable the user to select people (e.g., faces) as privacy events. In one example, the signal IPREFS may enable the user to select people (e.g., faces) to enable the processor 102 to distinguish between people that are considered privacy events and people that are not considered privacy events, [0132]).

Regarding claim 5, Nicholson et al. and Tang et al. disclose the method of claim 4. Nicholson et al. and Tang et al. further indicate modifying the captured image to prevent the one or more nonviewable objects of interest from being viewable comprises blurring the one or more nonviewable objects of interest, blacking out the one or more nonviewable objects of interest, or distorting the one or more nonviewable objects of interest (Nicholson et al., blur objects, [0023], [0050]; Tang et al., blur face, [0023], apply distortion, [0034], [0203]).

Regarding claim 12, Nicholson et al. and Tang et al. disclose the device of claim 10. Tang et al. further teach the objects of interest are stored in a secure portion of the memory which is not accessible by a non-secure operating system of the computing device (“In an example, the companion app implemented on the remote devices 204a-204n may enable the end users to adjust various settings for the camera systems 100a-100n and/or the video captured by the camera systems 100a-100n. In some embodiments, the settings may be stored in the cloud service 202 as part of the event settings storage 210 (e.g., using a secured account). However, in some embodiments, to ensure privacy protection, the settings of the signal IPREFS may instead avoid communication to/from the cloud service 202. For example, a direct connection and/or a communication that does not transfer data to the cloud service 202 may be established between one or more of the remote devices 204a-204n and the edge AI camera 100i. The signal IPREFS may comprise the faces and/or identities of various people that may be selected by the user. The signal IPREFS may enable the user to select people (e.g., faces) as privacy events. In one example, the signal IPREFS may enable the user to select people (e.g., faces) to enable the processor 102 to distinguish between people that are considered privacy events and people that are not considered privacy events. Generally, the data from the signal IPREFS may not be stored in the cloud services 202”, [0132], For example, there may be no concern of leaking family privacy information (e.g., video and/or images of family members and/or the behavior of family members) because the faces of the family members may be enrolled locally using the app on the remote devices 204a-204n and the feature set IFEAT generated from the enrolled faces may be sent via a local network rather than through the cloud service 202. The data about the events and/or objects of interest may be routed through the cloud service 202, but the family privacy information may never be uploaded to the cloud service 202, [0135]).

Regarding claim 13, Nicholson et al. and Tang et al. disclose the device of claim 10. Nicholson et al. and Tang et al. further indicate the objects of interest comprise one or more nonviewable objects of interest, and the processor is configured to: select the one or more nonviewable objects of interest; and modify the image by preventing the one or more nonviewable objects of interest from being viewable in the image (Nicholson et al., Thus, the system enables a user to select, on an object-by-object basis, which objects the user would like to conceal from the outgoing image data without requiring the user to blur or replace the entire background environment with a stock background. The image data produced by the system may resemble the user's actual room or space except for the select objects modified. The system can successfully conceal personal and private aspects visible in the background of a camera view, without substantially changing the image aesthetics, [0022], The first user may modify objects that are personal or private by choosing to blur, remove, or replace those objects., [0023], After the segmentation of the input image data, the controller 102 may identify a set of one or more of the objects 310 in the background environment 304 to modify. For example, some of the objects 310 may be deemed by the user as too personal or private, or simply not appropriate for the tenor of the video conference call. The set of objects 310 may be identified in order to conceal those objects 310 from view by persons that view the video stream showing the room 308., [0043], The profiles represent short cuts that allows the user to quickly access a pre-selected ensemble of object appearance changes, without setting individual object statuses. For example, if the controller 102 receives a command to implement a specific user profile, then after segmenting the input image data, the controller 102 identifies the set of one or more objects 310 to modify based on the designated settings in the selected user profile., [0046], The set of objects 310 that are modified may differ for different audiences of the streaming event. For example, a “work” profile may replace a family picture with a stock inspirational poster, may replace family photo albums with stock images of textbooks, and/or the like, [0056]; Tang et al., Embodiments of the present invention may be configured to protect the privacy of particular people when the smart timelapse video is generated. The video content (e.g., what appears in the smart timelapse video) may be automatically adjusted in response to the objects/events detected. Particular objects/events may be shown as captured in the smart timelapse video and other objects/events may be excluded and/or removed from the smart timelapse video stream. For example, the smart timelapse video stream may be generated to include the faces and behaviors of strangers but exclude the faces and behaviors of family members. Generally, the faces and/or behaviors excluded from the smart timelapse video stream may correspond to privacy concerns (e.g., identifying particular people, a person being uncomfortable being on video, preventing the storage of potentially embarrassing behaviors, etc.). The criteria for including or excluding video content may be varied according to the design criteria of a particular implementation, [0025], The signal IPREFS may be communicated via a local network in order to protect a privacy of people and/or faces of people that may be communicated (e.g., to generate feature set data)., [0131], The signal IPREFS may enable the user to select people (e.g., faces) as privacy events. In one example, the signal IPREFS may enable the user to select people (e.g., faces) to enable the processor 102 to distinguish between people that are considered privacy events and people that are not considered privacy events, [0132]).

Claim(s) 6 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nicholson et al. (US 20230306610 A1) and Tang et al. (US 20220058394 A1) as applied to claims 4 and 13, further in view of Kurtz et al. (US 20080297587 A1).

Regarding claims 6 and 14, Nicholson et al. and Tang et al. disclose the method and device of claims 4 and 13. Nicholson et al. and Tang et al. also indicate the stored list of objects of interest further comprise one or more viewable objects of interest (Nicholson et al., the one or more processors may be configured to identify the set of one or more objects to modify based on designated settings of a user profile stored on the memory or another data storage device that is operably connected to the one or more processors, [0008], In an embodiment, the controller 102 may display the annotated image 400 during a set-up or preview stage before the video feed generated by the camera 104 is remotely transmitted. For example, upon initiating a video conference application on the user computer device 202, the controller 102 may present the annotated image 400 to the user prior to joining the meeting. In another example, the user may select, via the input device 108, to update user settings. In response to the selection to update user settings, the controller 102 may display the annotated image 400 to enable the user to set the status of the objects 310. Once the statuses for the objects 310 are set, the controller 102 may save the object statuses as a user profile. For a subsequent streaming event (e.g., video conference call), the user may simply select the user profile from a list of profiles, and the controller 102 accesses the object statues from the user database 118 of the memory 114. The user may set multiple different profiles which have different objects 310 modified and/or the same objects 310 modified but in a different way. For example, the user may set one profile for work video conference meetings, a second profile for video conferences with extended family, and a third profile for video conferences with friends. The image alteration system 100 enables the user to curate the particular objects 310 shown in the background environment 304 for each of the different types of video streaming events. The profiles represent short cuts that allows the user to quickly access a pre-selected ensemble of object appearance changes, without setting individual object statuses. For example, if the controller 102 receives a command to implement a specific user profile, then after segmenting the input image data, the controller 102 identifies the set of one or more objects 310 to modify based on the designated settings in the selected user profile., [0046], The stock objects in the library 120 may be categorized based on type, size, and/or the like. The user may select the stock object to replace the given object 310 via the input device 108, [0054], The set of objects 310 that are modified may differ for different audiences of the streaming event. For example, a “work” profile may replace a family picture with a stock inspirational poster, may replace family photo albums with stock images of textbooks, and/or the like, [0056], Optionally, the set of objects to modify may be based on designated settings of a user profile and/or user input selections generated via an input device 108. The user profile may be stored in a memory device, such as the memory 114 of the controller 102, [0061], The image alteration system and method described herein allows the user to create a physical background that is customized for the audience of the video streaming event, such as a video conference call. The customization may be integrated into the existing background without universally blurring or replacing the background, [0065]; Tang et al., In some embodiments, the cloud service 202 may be configured to provide resources such as training data and/or a database of feature maps (e.g., feature maps of recognized objects that may be used as a basis to perform object recognition and/or classification), [0118], In some embodiments, the settings may be stored in the cloud service 202 as the event settings storage 210 (e.g., using a secured account). The signal PREFS may comprise the objects and/or events of interest selected by the user. In one example, the signal PREFS may enable the user to select people and animals as the objects and/or events of interest, [0123]). Nicholson et al. and Tang et al. do not explicitly disclose and the captured image is modified to prevent the one or more nonviewable objects of interest from being viewable in the captured image by cropping the captured image to include the one or more viewable objects of interest without the one or more nonviewable objects of interest. Kurtz et al. teach the captured image is modified to prevent the one or more nonviewable objects of interest from being viewable in the captured image by cropping the captured image to include the one or more viewable objects of interest without the one or more nonviewable objects of interest (Although users 10 may define image areas 422 for exclusion from video capture for various reasons, maintenance of personal or family privacy is likely the key motivator. As shown in FIG. 4A, an image capture device 120 (the WFOV camera) has a portion of its image field of view 420, indicated by image area 422, modified, for example, by cropping image area 422 out of the captured image before image transmission across network 360 to a remote site 364. The local user 10 can utilize the privacy interface 400 and the contextual interface 450 to establish human perceptible modifications to a privacy sensitive image area 422, [0066], For example, a privacy sensitive image area 422 may simply be cropped out of the captured images. Alternately, an image area 422 can be modified or obscured with other visual effects, such as distorting, blurring (lowering resolution), or shading (reducing brightness or contrast). For example, the shading can be applied as a gradient, to simulate a natural illumination fall-off. Device supplied scene analysis rules can be used to recommend obscuration effects, [0067], As another circumstance typical of the residential setting, it can be anticipated that children or pets or neighbors can wander into the capture field of view during a communication event. In particular, in such environments, it is not uncommon to have unclothed children wandering about the residence in unpresentable forms of attire. The contextual interface 450 can quickly recognize this and direct the image processor 320 to blur or crop out imagery of privacy sensitive areas. Indeed, the default settings in the privacy interface 400 may require such blurring or cropping, [0084]). Nicholson et al. and Kurtz et al. are in the same art of video-conferencing (Nicholson et al., [0001]; Kurtz et al., [0043]). The combination of Kurtz et al. with Nicholson et al. and Tang et al. will allow for cropping the image. It would have been obvious at the time of filing to combine the cropping of Kurtz et al. with the invention of Nicholson et al. and Tang et al. as this was known at the time of filing, the combination would have predictable results, and as Kurtz et al. indicate, “This video communication system is particularly intended for use in the residential environment, where a variety of factors, such as variable conditions and participants, ease of use, privacy concerns, and system cost, are highly relevant” ([0003]) and “maintenance of personal or family privacy is likely the key motivator” ([0066]) this provides a privacy motivation to the combination of inventions.

Claim(s) 8 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nicholson et al. (US 20230306610 A1).

Regarding claims 8 and 16, Nicholson et al. disclose the method and device of claims 7 and 15. Nicholson et al. further indicate providing, as inputs to the neural network, image data representing the captured image and image data representing the one or more objects of interest; and identifying the one or more regions of interest as comprising the one or more objects of interest (In an embodiment, the controller 102 attempts to recognize and identify the type or class of the objects 310 in the image data during the image segmentation. The controller 102 may use machine learning (e.g., artificial intelligence). For example, the one or more processors 112 may include an artificial neural network 122 (shown in FIG. 1), [0040], The neurons in the layers of the neural network 122 may examine characteristics of the pixels of the input image, such as the intensities, colors, or the like, to determine the classification vectors for the various pixels. The neural network 122 may assign or associate different pixels with different object classes based on the characteristics of the pixels. An object class is a type or category of an object 310 appearing in the image 300. For example, a window can be a first object class, a wall can be a different, second object class, a framed painting or photograph can be a third object class, and objects on shelves, such as books, can be a fourth object class, [0041]). It would have been obvious at the time of filing that as the keep or modify areas are based on the objects detected, this object detection and classification can be interpreted as identifying the one or more regions of interest as comprising the one or more objects of interest.

Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nicholson et al. (US 20230306610 A1) as applied to claim 1, further in view of Elron et al. (US 20220092400 A1).

Regarding claim 9, Nicholson et al. disclose the method of claim 1. Nicholson et al. partly disclose identifying the one or more regions of interest as comprising the one or more objects of interest using a neural network, trained prior to runtime, to recognize the one or more objects of interest (The neural network 122 may be trained to perform image segmentation to identify different types of objects 310 in the background environment 304, [0040]), however another reference is added to make this explicit. Elron et al. teach identifying the one or more regions of interest as comprising the one or more objects of interest using a neural network, trained prior to runtime, to recognize the one or more objects of interest (A foreground-background classification (or segmentation) was obtained using a convolutional neural network (CNN). An image 104 shows the difference in classifications (between background and foreground) between the two consecutive frames 100 and 102. Those light areas that are non-zero (in difference indicating motion) in image 104 are a very small part of the image and indicate the noticeable differences between the two frames. An image 106 shows a shaded overlay 108 indicating the locations on the frame that a temporal predictor of the present method disabled turned off a main CNN by omitting layer operations for this area of the frame, [0025], neural network inferencing, [0049], “As a preliminary matter, process 600 may include “train neural networks” 602, and by one example, this is performed offline before a runtime. The main NN may be trained as by known methods and depending on the architecture and purpose of the NN. No significant changes to the training are needed to implement the main NN disclosed herein”, [0052], highly efficient neural network video image processing, semantic classifications when object segmentation is being performed, [0061]). Nicholson et al. and Elron et al. are in the same art of video-conferencing (Nicholson et al., [0001]; Elron et al., [0027] [0117]). The combination of Elron et al. with Nicholson et al. and Tang et al. will allow for using a neural network, trained prior to runtime, to recognize the one or more objects of interest. It would have been obvious at the time of filing to combine the training of Elron et al. with the invention of Nicholson et al. and Tang et al. as this was known at the time of filing, the combination would have predictable results, and as Elron et al. indicate this will allow for highly efficient neural network video image processing, semantic classifications when object segmentation is being performed ([0061]) providing an efficiency benefit to the combination of inventions.

Claim(s) 18-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nicholson et al. (US 20230306610 A1) in view of Zhou et al. (US 20200151425 A1).

Regarding claim 18, Nicholson et al. disclose a computing device for filtering objects of interest of images (curate the background environment behind the first user, as captured within a field of view of a camera, [0023]) comprising: an image capturing device (curate the background environment behind the first user, as captured within a field of view of a camera, [0023]); memory configured to store objects of interest (abstract, [0004], [0029], [0036]); and a first processor (one or more processors 112, [0029], [0060], For example, a server having a first processor, a network interface, and a storage device for storing code may store the program code for carrying out the operations and provide this code through its network interface via a network to a second device having a second processor for execution of the code on the second device, [0068]) configured to, for an image captured by the image capturing device (camera, image, video, [0023]), determine one or more regions of interest to be modified in an image captured by the image capturing device based on the one or more objects of interest (The system may determine, either autonomously or through user input, which portions of the background environment to leave as is and which portions of the background environment to modify. Thus, the system enables a user to select, on an object-by-object basis, which objects the user would like to conceal from the outgoing image data without requiring the user to blur or replace the entire background environment with a stock background, [0022], The term “object” as used herein can refer to items, architectural elements, areas of a space defined between items and/or architectural elements, and/or the like. The items may include books, photographs, electronic devices, papers, posters, paintings, signs, light fixtures, lamps, furniture, and the like. The architectural elements may include windows, doorframes, walls, shelves, and/or the like. The areas between items and/or architectural elements may include a hallway and/or a doorway that shows a portion of another room visible in the background environment, and the like, [0024], After the segmentation of the input image data, the controller 102 may identify a set of one or more of the objects 310 in the background environment 304 to modify. For example, some of the objects 310 may be deemed by the user as too personal or private, or simply not appropriate for the tenor of the video conference call. The set of objects 310 may be identified in order to conceal those objects 310 from view by persons that view the video stream showing the room 308, [0043], FIG. 4 illustrates an annotated image 400 that is generated by the controller 102 of the image alteration system 100 according to an embodiment. In an embodiment, the controller 102 prompts the user to select the set of objects 310 to modify, The graphic indicators 402 indicate the objects 310 in the background environment 304 that were located via the image analysis and segmentation process. The indicators 402 are illustrated in FIG. 4 as ovals or ellipses positioned to overlap and/or surround the corresponding objects 310. The ovals or ellipses may have unfilled or transparent interior areas to enable the user to see which object 310 is surrounded and/or overlapped by each indicator 402. In other embodiments, the indicators 402 may have a different shape or characteristic. For example, the indicators 402 may be closed shapes that outline the objects 310 by extending along the perimeter contours of the objects 310, [0044], In an embodiment, the user may select, via the input device 108, one or more of the graphic indicators 402 and/or objects 310 to set a status for the selected objects 310 as “keep” or “modify”. If the user desires to conceal a first object 310, then the user may control the input device 108 to provide a user input selection of the graphic indicator 402 associated with the first object 310, [0045]); and modify the image based on the one or more regions of interest determined by the first processor, wherein the image is displayed without the one or more objects of interest being viewable (The first user may modify objects that are personal or private by choosing to blur, remove, or replace those objects, For example, if the first user selects to blur a family photo, then the video stream showing the first user that is received and displayed on the remote computer devices during the conference call shows a blurred image of the family photo without blurring other portions of the background surrounding the family photo, [0023], With respect to the blurring mode, the controller 102 may blur an object 310 by obscuring the appearance of the object 310. The object 310 may be obscured by reducing the pixel resolution, modifying some of the pixels that depict the object 310, or superimposing a stock blur image over the pixels that depict the object 310, [0050]). Nicholson et al. indicate one or more processors and a second processor, but do not explicitly disclose a second processor configured to, for the image captured by the image capturing device: convert the image for processing by the first processor. Zhou et al. teach a second processor configured to, for the image captured by the image capturing device: convert the image for processing by the first processor (Processing the infrared image and the speckle image by the first processing unit 130 means to correct the infrared image or the speckle image, and to remove an influence of internal and external parameters of the camera module 110 on the image, [0244], In response to the first processing unit sending the corrected infrared image and the corrected speckle image to the second processing unit in the first execution environment, the second processing unit may obtain the target infrared image according to the corrected infrared image, and obtain the target speckle image or the depth image according to the corrected speckle image. The second processing unit may perform the face detection according to the infrared image and the depth image, and the face detection may include the face recognition, the face matching, and the living-body detection, [0262]) [the “first processing unit” of Zhou interpreted as the claimed “second processor” and the “second processing unit” of Zhou interpreted as the claimed “first processor”]. Nicholson et al. and Zhou et al. are in the same art of region detection and using multiple processors (Nicholson et al., abstract, [0004], [0068]; Zhou et al., [0262]). The combination of Zhou et al. with Nicholson et al. will allow for a second processor to convert the image for processing by the first processor. It would have been obvious at the time of filing to combine the second processor of Zhou et al. with the invention of Nicholson et al. as this was known at the time of filing, the combination would have predictable results, and as Zhou et al. indicate, “The processing unit may process the images acquired by the camera. The processing unit is connected with the camera. Images acquired by the camera may be transmitted to the processing unit and then processing such as cutting, brightness adjustment, face detection, face recognition and the like may be performed by the processing unit. In this embodiment, the electro
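Stripped of the citations, the rejections map one concrete pipeline: segment the captured frame, compare detections against a stored list of objects of interest, and blur, black out, or crop the matching regions before the frame is displayed or transmitted. A minimal sketch of that flow follows, with a hypothetical detect_objects() standing in for the trained network and an illustrative nonviewable list; it illustrates the technique the Office Action describes, not code from the application or any cited reference:

import numpy as np

# Hypothetical stand-in for a trained segmentation/detection model that
# returns (label, (x0, y0, x1, y1)) pairs, per the neural-network passages
# cited from Nicholson [0040]-[0041] and Elron [0052].
def detect_objects(frame):
    raise NotImplementedError("plug in a real detector here")

# Illustrative stored list of nonviewable objects (cf. claims 2-5).
NONVIEWABLE = {"family_photo", "whiteboard"}

def blur_region(frame, box, factor=16):
    # One obscuring approach Nicholson [0050] describes: reduce the
    # region's effective pixel resolution (downsample, then upsample).
    x0, y0, x1, y1 = box
    region = frame[y0:y1, x0:x1]
    small = region[::factor, ::factor]
    up = np.kron(small, np.ones((factor, factor, 1), dtype=frame.dtype))
    frame[y0:y1, x0:x1] = up[: y1 - y0, : x1 - x0]
    return frame

def filter_frame(frame):
    # Blur every detected region whose label is on the nonviewable list,
    # so the displayed frame omits the objects of interest.
    for label, box in detect_objects(frame):
        if label in NONVIEWABLE:
            frame = blur_region(frame, box)
    return frame

The cropping variant mapped to claims 6 and 14 would instead crop the frame to the bounding box of the viewable objects, excluding the nonviewable regions entirely.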

Prosecution Timeline

Sep 28, 2023
Application Filed
Dec 09, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602775
INTERPOLATION OF MEDICAL IMAGES
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602793
Systems and Methods for Predicting Object Location Within Images and for Analyzing the Images in the Predicted Location for Object Tracking
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602949
SYSTEM AND METHOD FOR DETECTING HUMAN PRESENCE BASED ON DEPTH SENSING AND INERTIAL MEASUREMENT
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597261
OBJECT MOVEMENT BEHAVIOR LEARNING
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597244
METHOD AND DEVICE FOR IMPROVING OBJECT RECOGNITION RATE OF SELF-DRIVING CAR
Granted Apr 07, 2026 (2y 5m to grant)
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Typically takes 5 to 10 seconds.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 98% (+21.6%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 863 resolved cases by this examiner. Grant probability derived from career allow rate.
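The interview-adjusted figure appears to be simply additive; assuming the dashboard stacks the interview lift on the base rate, 76.2% (career allow rate, 658/863) + 21.6% (interview lift) = 97.8%, which rounds to the 98% shown. That reading is an inference from the displayed numbers, not a documented methodology.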
