How to get all metadata into Git, and what is the optimal release flow when using sandboxes?
Our current development and deployment flow is sandbox to sandbox, and the plan is to move to Git-based deployments.
What is the best way to get all the metadata of an existing org into Git (Bitbucket) for the first time, into a master branch?
What would be the optimal release flow if people are developing or configuring out of a single sandbox? (I know this is not ideal.)
Tags: deployment, git, ci
asked Dec 14 at 15:03
ab0369
4 Answers
What is the best way to get all the metadata of an existing org into Git (Bitbucket) for the first time into a master branch?
You need to generate a package.xml to pull all the existing metadata that you're planning to manage in Git and execute a Metadata API retrieve operation. You can construct the package by hand, or use a tool to prepare one.
One option is to use the Eclipse Force.com IDE to build the package by selecting the components that you want. You can then pull them via Eclipse, or use Workbench to perform a clean retrieve in a different working environment. Then, commit the extracted results to Git.
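For instance, here is a rough sketch of that first retrieve using the Salesforce CLI instead of Eclipse; the metadata types listed, the org alias MyProdOrg, and the API version are illustrative placeholders, not a prescribed set:

```bash
# Rough sketch only: hand-built package.xml plus a CLI retrieve.
# "MyProdOrg" is a placeholder org alias; the type list is just an example subset.
cat > package.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>*</members>
        <name>ApexClass</name>
    </types>
    <types>
        <members>*</members>
        <name>ApexTrigger</name>
    </types>
    <types>
        <members>*</members>
        <name>ApexPage</name>
    </types>
    <types>
        <members>*</members>
        <name>CustomObject</name>
    </types>
    <version>44.0</version>
</Package>
EOF

# Retrieve everything the manifest describes, unzip it, and commit to master.
sfdx force:mdapi:retrieve -k package.xml -r ./mdapi-output -u MyProdOrg -w 30
unzip -o ./mdapi-output/unpackaged.zip -d ./src
git checkout master
git add src package.xml
git commit -m "Initial import of org metadata"
```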
What will be the optimal release flow, if people are developing or configuring out of single sandbox (I know this is not ideal)
There is unfortunately nothing optimal about that sandbox layout, although the diagram you included suggests that your workflow may actually be a little more sophisticated and include individual Developer sandboxes for your developers, with a single shared sandbox for declarative work.
Release flow and cadence are very organization- and process-specific.
My general recommendation is to ensure that Git branches and merges are being used in a best-process fashion. With multiple Dev sandboxes feeding a single Dev Pro/integration sandbox, I would be skeptical about having developers do direct deployments to integration.
You'll need to define an effective workflow here to ensure that developers pull and merge work from an integration branch corresponding to that sandbox (likely into their own feature or story branches) before pushing to the integration sandbox in order to avoid overwriting code. Declarative work could be pulled directly from integration and checked in on the integration branch.
Pull requests could then be made from the integration branch to a QA branch for deployment upwards, and ultimately to master for release to production.
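A minimal sketch of what that day-to-day flow could look like at the command line (branch names such as integration, qa, and feature/story-123 are placeholders for whatever naming convention you adopt):

```bash
# Illustrative only: branch names are placeholders for your own convention.
# Start a story branch from the integration branch.
git checkout integration
git pull origin integration
git checkout -b feature/story-123

# ... build in your dev sandbox, retrieve the changed metadata, commit it ...
git add src
git commit -m "Story 123: new trigger and supporting class"

# Merge the latest integration work before deploying to the integration sandbox,
# so you don't overwrite changes that landed after you branched.
git pull origin integration

# Push and raise pull requests up the chain:
# feature/story-123 -> integration -> qa -> master (release to production)
git push -u origin feature/story-123
```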
Even if you're not using Salesforce DX, you want to ensure that Git is defined as the source of truth for your org, not the contents of the sandboxes. The sandbox content should always be amenable to being replaced with a deployment from one of your main branches.
This is definitely an area where someone on your team needs to have the relevant expertise on managing a Git-based development flow - it's hard to use this kind of architecture operationally by just following instructions.
answered Dec 14 at 15:13 – David Reed
We want to pull all the metadata components (except developer.salesforce.com/docs/atlas.en-us.api_meta.meta/…) that can be deployed into the master branch. Since we have limits (you can retrieve and deploy up to 10,000 files or 400 MB), we're trying to figure out the best process to get all the metadata into the master branch; we have a huge number of components such as classes, triggers, report types, reports, VF pages, Lightning components, validation rules, custom objects, fields, email alerts, workflow rules, process flows, etc. Planning to use the sfdx CLI mdapi retrieve.
– ab0369
Dec 14 at 15:19
If your org exceeds the limits of the Metadata API, you'll need to batch your retrieves by building multiple package.xml files to extract specific groups of metadata components.
– David Reed
Dec 14 at 15:23
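To illustrate, a rough sketch of batching those retrieves; how you split the manifests is arbitrary, and the file names and org alias below are placeholders:

```bash
# Sketch of batching a large extract: several smaller package.xml files so each
# retrieve stays under the 10,000-file / 400 MB limit. The grouping and the
# MyProdOrg alias are placeholders.
for manifest in package-apex.xml package-objects.xml package-reports.xml; do
    out="mdapi-$(basename "$manifest" .xml)"
    sfdx force:mdapi:retrieve -k "$manifest" -r "./$out" -u MyProdOrg -w 30
    # Each zip unpacks an unpackaged/ folder; later batches overlay earlier ones.
    unzip -o "./$out/unpackaged.zip" -d ./src
done
git add src
git commit -m "Initial metadata import, retrieved in batches"
```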
I have seen that when master grows over time, as it's the ultimate source of truth, deploying from master leads to an extended deployment time. The process I usually follow is to deploy from a release branch which consists only of what is required for the release (a sort of delta deployment approach), instead of overwriting Production with what's not needed. It ultimately comes down to the process being followed, which varies from place to place.
– Jayant Das
Dec 14 at 15:37
@JayantDas It's also possible to do a delta deploy off a commit range on master via something like git diff HEAD~1 --name-only --diff-filter=ACM. Either way, I agree with you that that is preferable most of the time.
– David Reed
Dec 14 at 16:05
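A hedged sketch of how that diff output could drive a delta deployment, assuming the repo keeps metadata under src/ and that a matching package.xml for the delta is prepared separately:

```bash
# Sketch: copy only files changed by the last commit on master into a delta
# folder and deploy that. Assumes metadata lives under src/; the package.xml
# for the delta folder is not built here and would need to be prepared.
git checkout master
mkdir -p delta
git diff HEAD~1 --name-only --diff-filter=ACM -- src/ | while read -r f; do
    mkdir -p "delta/$(dirname "$f")"
    cp "$f" "delta/$f"
    # Apex and Visualforce components also need their -meta.xml companions.
    if [ -f "$f-meta.xml" ]; then cp "$f-meta.xml" "delta/$f-meta.xml"; fi
done
# Deploy the delta with the Metadata API (org alias is a placeholder).
sfdx force:mdapi:deploy -d delta/src -u MyProdOrg -w 30
```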
@ab0369 It will work perfectly fine. I have used this approach in quite a few of my implementations.
– Jayant Das
Dec 15 at 14:33
While I would recommend moving to Salesforce DX, in the traditional model of release management your master should always be a replica of what's in Production.
A typical flow that I have followed in the past is:
- Create master as the gold copy. This should be your most recent stable copy of the code running in Production. You will need to decide which metadata is important for you to version-manage, e.g. whether you are only looking to version control Apex classes and triggers, or also Roles, Profiles, etc. My approach is that whatever we build is version controlled.
As for how to extract the metadata, you will need to use the Metadata API to extract it from Production and commit it to the repo. You can use the Force.com IDE for this purpose. Take a look at this link for details.
- Create a release branch from master. The approach I have usually followed here is to release only what's needed: if the branch has too many components (say, copied over from master) that are not modified in the current release, deployment will take longer. Whether you release from master or from a release branch depends on your approach, but in general you should only deploy what's required.
- Developers create feature branches from the release branch, build, test, and create PRs to merge to the release branch (see the command sketch after this list).
- Release your code from the release branch to Production
- Merge with master after release
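A minimal command-level sketch of this flow (branch and tag names are placeholders, and the Production deployment step itself is left to whatever tooling you use):

```bash
# Sketch of the release-branch flow above; branch and tag names are placeholders.
# 1. Cut a release branch from master containing only what this release needs.
git checkout master && git pull
git checkout -b release/2019-01

# 2. Developers branch features off the release branch and raise PRs back into it.
git checkout -b feature/JIRA-42 release/2019-01
# ... commit work, then open a PR from feature/JIRA-42 into release/2019-01 ...

# 3. Deploy the release branch to Production (Metadata API, CI job, etc. -- not shown).

# 4. After the release, fold the release branch back into master so master
#    remains a replica of Production.
git checkout master
git merge --no-ff release/2019-01
git tag v2019-01
git push origin master --tags
```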
There can be different ways to handle this depending on scale and preference, but typically this approach has worked well for me.
You will definitely need to take care of other aspects, viz. code reviews, code merging, etc., but as long as you put the process in place, this will work fine.
This approach also works if your developers are working out of a single sandbox: every developer creates a feature branch, and before it can be merged to the release branch and moved to upper environments, you can implement a review process to make sure the code being committed to the release branch is validated.
answered Dec 14 at 15:09, edited Dec 14 at 15:26 – Jayant Das
We are not ready for DX yet; we want to get there, but first we want to set up Git-based deployments. How do we get all the metadata into a master branch, given the limits around time and number of components?
– ab0369
Dec 14 at 15:11
I updated with details in the first bullet in my answer.
– Jayant Das
Dec 14 at 15:13
Disclaimer: I work for Gearset
I'm a little late to the party but hopefully I can add something of value - we've got a few largely product-agnostic whitepapers for download that might be useful to you, specifically:
Simplifying Salesforce release management: talks about developing a process from a sandbox to sandbox model to something more complex targeted at the maturity and size of your team.
Version control for Salesforce: this is the follow-up to it, and talks about adopting version control incrementally, again moving from sandbox to sandbox, through introducing version control, picking a branching strategy that suits your workflow, etc.
Adopting Salesforce DX: this one's a little more of a thought piece about where DX fits into this process going forwards, and how you can get the most from DX with teams of mixed technical backgrounds.
They get a little product specific at times, but they're largely agnostic - you can follow a similar approach regardless of the tools you choose. You can download all of the above without needing to give us any contact info etc. at:
https://gearset.com/resources#whitepapers
You can also use Gearset to directly compare and deploy to and from git repos in either the DX or Ant-style metadata formats. This is a really easy way to get that initial batch of metadata into your repo. There's a free 30 day trial etc., so you can do it in a few clicks - just pick a source org and a target bitbucket repo and branch, run a comparison, select all components, add a commit message and hit deploy, and your repo will be ready to go.
I agree with Bill that a subset of metadata makes sense as a starting point, but you're unlikely to get absolute consensus on what that subset should be. Gearset's default set of metadata sits at around 64 types at the moment, and we think that's a good starting point for a manual workflow. For a more automated flow that uses CI, we'd start by stripping that down further, getting your process working, then incrementally adding types as you move forward.
In terms of workflow, I'd echo some of the other posters - aim for a 1:1 mapping between feature branch and dev sandbox (or spin up scratch orgs as appropriate if you go the DX route). We then commonly see a staging / integration org that tracks your master branch, before a UAT environment then obviously prod. Master should always contain only code that you're happy to ship, and if it's broken / undeployable at any point then the priority should be to get that back into a shippable state asap. All merges to master should be via pull / merge request, and all deployments to staging / integration should be from master. From there, ideally you shouldn't have any changes being made directly to other environments - they should all flow from master.
If you do need to make changes directly into an environment like integration, I'd probably recommend setting up another environment for those changes, and then set up a job that monitors that environment and lets you pull changes back to a corresponding branch on a regular cadence. You can then follow the gitflow / PR model and merge that branch back into master on review. Gearset has a monitoring feature that'll notify you when an org has changed from its previous state, and push those changes to a branch in git with a couple of clicks, which would come in very handy here, but you can also set something like that up with Jenkins / TeamCity and the Salesforce CLI / Ant, for example - just a little legwork.
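For example, a rough sketch of such a monitoring job using the Salesforce CLI; the branch name, org alias, and manifest are assumptions, and the repo layout would need to match what the retrieve unpacks:

```bash
# Sketch of a scheduled snapshot job (cron / Jenkins / TeamCity step).
# Branch name, org alias, and manifest are placeholders.
set -e
git checkout integration-snapshot
git pull origin integration-snapshot

# Pull the sandbox's current metadata using the manifest the repo already tracks.
sfdx force:mdapi:retrieve -k package.xml -r ./snapshot -u IntegrationSandbox -w 30
unzip -o ./snapshot/unpackaged.zip -d .
rm -rf ./snapshot

# Commit and push only if something actually changed, then review it via a PR.
if [ -n "$(git status --porcelain)" ]; then
    git add -A
    git commit -m "Automated snapshot of integration sandbox $(date +%F)"
    git push origin integration-snapshot
fi
```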
This is a fairly interesting topic to me, so if you want to talk about it feel free to open the on-site chat at gearset.com and leave a message - it'll come straight to the dev team and we're always happy to chat Salesforce DevOps :)
Hope that helps!
answered 2 days ago, edited 2 days ago – mpd106
I don't recommend moving all of your Metadata into git. For many orgs, this is not practical, and the result is a gigantic repo. Use git for assets that need code merge -- like Apex Classes, Triggers, and Pages. Some other assets such as Lightning Bundles, Custom Objects, and Tabs are also good candidates for a repo.
But for the other 200 Metadata Types, use a Change and Release Management tool for comparing and merging live Salesforce orgs. Merge your code assets in the repo and move them into a Sandbox for integration testing. Otherwise you will end up with a massive XML comparison and merge challenge. And just because you merged some XML in a repo does NOT mean it can be deployed in an actual org.
– Bill Appleton (new contributor)
What metadata do you think is not a good option for repo?
– ab0369
Dec 15 at 21:52
Profiles, for example. They are gigantic and very complex. I would use a repo for assets that a human can merge, like Apex.
– Bill Appleton
Dec 16 at 23:35
Understood and will keep this in mind while implementing
– ab0369
Dec 17 at 3:54
I've worked on projects that do keep profiles in VCS. It is possible, but it takes a lot of discipline in constructing your commits and keeping environments refreshed. I prefer to VCS everything except profiles.
– David Reed
2 days ago
add a comment |
Your Answer
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "459"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsalesforce.stackexchange.com%2fquestions%2f242629%2fhow-to-get-all-metadata-into-git-and-what-will-the-optimal-release-flow-using-sa%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
4 Answers
4
active
oldest
votes
4 Answers
4
active
oldest
votes
active
oldest
votes
active
oldest
votes
What is the best way to get all the metadata of an existing org into Git (Bitbucket) for the first time into a master branch?
You need to generate a package.xml
to pull all the existing metadata that you're planning to manage in Git and execute a Metadata API retrieve operation. You can construct the package by hand, or use a tool to prepare one.
One option is to use the Eclipse Force.com IDE to build the package by selecting the components that you want. You can then pull them via Eclipse, or use Workbench to perform a clean retrieve in a different working environment. Then, commit the extracted results to Git.
What will be the optimal release flow, if people are developing or configuring out of single sandbox (I know this is not ideal)
There is unfortunately nothing optimal about that sandbox layout, although the diagram you included suggests that your workflow may actually be a little more sophisticated and include individual Developer sandboxes for your developers, with a single shared sandbox for declarative work.
Release flow and cadence is very organization- and process- specific.
My general recommendation is to ensure that Git branches and merges are being used in a best-process fashion. With multiple Dev sandboxes feeding a single Dev Pro/integration sandbox, I would be skeptical about having developers do direct deployments to integration.
You'll need to define an effective workflow here to ensure that developers pull and merge work from an integration branch corresponding to that sandbox (likely into their own feature or story branches) before pushing to the integration sandbox in order to avoid overwriting code. Declarative work could be pulled directly from integration and checked in on the integration branch.
Pull requests could then be made from the integration branch to a QA branch for deployment upwards, and ultimately to master
for release to production.
Even if you're not using Salesforce DX, you want to ensure that Git is defined as the source of truth for your org, not the contents of the sandboxes. The sandbox content should always be amenable to being replaced with a deployment from one of your main branches.
This is definitely an area where someone on your team needs to have the relevant expertise on managing a Git-based development flow - it's hard to use this kind of architecture operationally by just following instructions.
We want to pull all the metadata components (except - developer.salesforce.com/docs/atlas.en-us.api_meta.meta/…) that can be deployed into the master branch. Since we have limits (You can retrieve and deploy up to 10,000 files or 400 MB) trying to figure out the best process to get all the metadata into master branch, we have huge number of components like classes, triggers, report types, reports, vf pages, lightning components, validation rules, custom objects, fields, email alerts, wfrules, process flows etc. Planning to use sfdx cli mdapi retreive
– ab0369
Dec 14 at 15:19
1
If your org exceeds the limits of the Metadata API, you'll need to batch your retrieves by building multiplepackage.xml
files to extract specific groups of metadata components.
– David Reed
Dec 14 at 15:23
1
I have seen that whenmaster
grows over time as it's the ultimate source of truth, deploying frommaster
leads to an extended deployment time. From the process I usually follow is to deploy from arelease
branch which consists only of what is required for the release (sort of delta deployment approach), instead of overwriting Production with what's not needed. It though drills down to the process being followed varying from place to place.
– Jayant Das
Dec 14 at 15:37
@JayantDas It's also possible to do a delta deploy off a commit range onmaster
via something likegit diff HEAD~1 --name-only --diff-filter=ACM
. Either way, I agree with you that that is preferable most of the time.
– David Reed
Dec 14 at 16:05
1
@ab0369 It will work perfectly fine. I have used this approach in quite a few of my implementations
– Jayant Das
Dec 15 at 14:33
|
show 3 more comments
What is the best way to get all the metadata of an existing org into Git (Bitbucket) for the first time into a master branch?
You need to generate a package.xml
to pull all the existing metadata that you're planning to manage in Git and execute a Metadata API retrieve operation. You can construct the package by hand, or use a tool to prepare one.
One option is to use the Eclipse Force.com IDE to build the package by selecting the components that you want. You can then pull them via Eclipse, or use Workbench to perform a clean retrieve in a different working environment. Then, commit the extracted results to Git.
What will be the optimal release flow, if people are developing or configuring out of single sandbox (I know this is not ideal)
There is unfortunately nothing optimal about that sandbox layout, although the diagram you included suggests that your workflow may actually be a little more sophisticated and include individual Developer sandboxes for your developers, with a single shared sandbox for declarative work.
Release flow and cadence is very organization- and process- specific.
My general recommendation is to ensure that Git branches and merges are being used in a best-process fashion. With multiple Dev sandboxes feeding a single Dev Pro/integration sandbox, I would be skeptical about having developers do direct deployments to integration.
You'll need to define an effective workflow here to ensure that developers pull and merge work from an integration branch corresponding to that sandbox (likely into their own feature or story branches) before pushing to the integration sandbox in order to avoid overwriting code. Declarative work could be pulled directly from integration and checked in on the integration branch.
Pull requests could then be made from the integration branch to a QA branch for deployment upwards, and ultimately to master
for release to production.
Even if you're not using Salesforce DX, you want to ensure that Git is defined as the source of truth for your org, not the contents of the sandboxes. The sandbox content should always be amenable to being replaced with a deployment from one of your main branches.
This is definitely an area where someone on your team needs to have the relevant expertise on managing a Git-based development flow - it's hard to use this kind of architecture operationally by just following instructions.
We want to pull all the metadata components (except - developer.salesforce.com/docs/atlas.en-us.api_meta.meta/…) that can be deployed into the master branch. Since we have limits (You can retrieve and deploy up to 10,000 files or 400 MB) trying to figure out the best process to get all the metadata into master branch, we have huge number of components like classes, triggers, report types, reports, vf pages, lightning components, validation rules, custom objects, fields, email alerts, wfrules, process flows etc. Planning to use sfdx cli mdapi retreive
– ab0369
Dec 14 at 15:19
1
If your org exceeds the limits of the Metadata API, you'll need to batch your retrieves by building multiplepackage.xml
files to extract specific groups of metadata components.
– David Reed
Dec 14 at 15:23
1
I have seen that whenmaster
grows over time as it's the ultimate source of truth, deploying frommaster
leads to an extended deployment time. From the process I usually follow is to deploy from arelease
branch which consists only of what is required for the release (sort of delta deployment approach), instead of overwriting Production with what's not needed. It though drills down to the process being followed varying from place to place.
– Jayant Das
Dec 14 at 15:37
@JayantDas It's also possible to do a delta deploy off a commit range onmaster
via something likegit diff HEAD~1 --name-only --diff-filter=ACM
. Either way, I agree with you that that is preferable most of the time.
– David Reed
Dec 14 at 16:05
1
@ab0369 It will work perfectly fine. I have used this approach in quite a few of my implementations
– Jayant Das
Dec 15 at 14:33
|
show 3 more comments
What is the best way to get all the metadata of an existing org into Git (Bitbucket) for the first time into a master branch?
You need to generate a package.xml
to pull all the existing metadata that you're planning to manage in Git and execute a Metadata API retrieve operation. You can construct the package by hand, or use a tool to prepare one.
One option is to use the Eclipse Force.com IDE to build the package by selecting the components that you want. You can then pull them via Eclipse, or use Workbench to perform a clean retrieve in a different working environment. Then, commit the extracted results to Git.
What will be the optimal release flow, if people are developing or configuring out of single sandbox (I know this is not ideal)
There is unfortunately nothing optimal about that sandbox layout, although the diagram you included suggests that your workflow may actually be a little more sophisticated and include individual Developer sandboxes for your developers, with a single shared sandbox for declarative work.
Release flow and cadence is very organization- and process- specific.
My general recommendation is to ensure that Git branches and merges are being used in a best-process fashion. With multiple Dev sandboxes feeding a single Dev Pro/integration sandbox, I would be skeptical about having developers do direct deployments to integration.
You'll need to define an effective workflow here to ensure that developers pull and merge work from an integration branch corresponding to that sandbox (likely into their own feature or story branches) before pushing to the integration sandbox in order to avoid overwriting code. Declarative work could be pulled directly from integration and checked in on the integration branch.
Pull requests could then be made from the integration branch to a QA branch for deployment upwards, and ultimately to master
for release to production.
Even if you're not using Salesforce DX, you want to ensure that Git is defined as the source of truth for your org, not the contents of the sandboxes. The sandbox content should always be amenable to being replaced with a deployment from one of your main branches.
This is definitely an area where someone on your team needs to have the relevant expertise on managing a Git-based development flow - it's hard to use this kind of architecture operationally by just following instructions.
What is the best way to get all the metadata of an existing org into Git (Bitbucket) for the first time into a master branch?
You need to generate a package.xml
to pull all the existing metadata that you're planning to manage in Git and execute a Metadata API retrieve operation. You can construct the package by hand, or use a tool to prepare one.
One option is to use the Eclipse Force.com IDE to build the package by selecting the components that you want. You can then pull them via Eclipse, or use Workbench to perform a clean retrieve in a different working environment. Then, commit the extracted results to Git.
What will be the optimal release flow, if people are developing or configuring out of single sandbox (I know this is not ideal)
There is unfortunately nothing optimal about that sandbox layout, although the diagram you included suggests that your workflow may actually be a little more sophisticated and include individual Developer sandboxes for your developers, with a single shared sandbox for declarative work.
Release flow and cadence is very organization- and process- specific.
My general recommendation is to ensure that Git branches and merges are being used in a best-process fashion. With multiple Dev sandboxes feeding a single Dev Pro/integration sandbox, I would be skeptical about having developers do direct deployments to integration.
You'll need to define an effective workflow here to ensure that developers pull and merge work from an integration branch corresponding to that sandbox (likely into their own feature or story branches) before pushing to the integration sandbox in order to avoid overwriting code. Declarative work could be pulled directly from integration and checked in on the integration branch.
Pull requests could then be made from the integration branch to a QA branch for deployment upwards, and ultimately to master
for release to production.
Even if you're not using Salesforce DX, you want to ensure that Git is defined as the source of truth for your org, not the contents of the sandboxes. The sandbox content should always be amenable to being replaced with a deployment from one of your main branches.
This is definitely an area where someone on your team needs to have the relevant expertise on managing a Git-based development flow - it's hard to use this kind of architecture operationally by just following instructions.
answered Dec 14 at 15:13
David Reed
29.2k61746
29.2k61746
We want to pull all the metadata components (except - developer.salesforce.com/docs/atlas.en-us.api_meta.meta/…) that can be deployed into the master branch. Since we have limits (You can retrieve and deploy up to 10,000 files or 400 MB) trying to figure out the best process to get all the metadata into master branch, we have huge number of components like classes, triggers, report types, reports, vf pages, lightning components, validation rules, custom objects, fields, email alerts, wfrules, process flows etc. Planning to use sfdx cli mdapi retreive
– ab0369
Dec 14 at 15:19
1
If your org exceeds the limits of the Metadata API, you'll need to batch your retrieves by building multiplepackage.xml
files to extract specific groups of metadata components.
– David Reed
Dec 14 at 15:23
1
I have seen that whenmaster
grows over time as it's the ultimate source of truth, deploying frommaster
leads to an extended deployment time. From the process I usually follow is to deploy from arelease
branch which consists only of what is required for the release (sort of delta deployment approach), instead of overwriting Production with what's not needed. It though drills down to the process being followed varying from place to place.
– Jayant Das
Dec 14 at 15:37
@JayantDas It's also possible to do a delta deploy off a commit range onmaster
via something likegit diff HEAD~1 --name-only --diff-filter=ACM
. Either way, I agree with you that that is preferable most of the time.
– David Reed
Dec 14 at 16:05
1
@ab0369 It will work perfectly fine. I have used this approach in quite a few of my implementations
– Jayant Das
Dec 15 at 14:33
|
show 3 more comments
We want to pull all the metadata components (except - developer.salesforce.com/docs/atlas.en-us.api_meta.meta/…) that can be deployed into the master branch. Since we have limits (You can retrieve and deploy up to 10,000 files or 400 MB) trying to figure out the best process to get all the metadata into master branch, we have huge number of components like classes, triggers, report types, reports, vf pages, lightning components, validation rules, custom objects, fields, email alerts, wfrules, process flows etc. Planning to use sfdx cli mdapi retreive
– ab0369
Dec 14 at 15:19
1
If your org exceeds the limits of the Metadata API, you'll need to batch your retrieves by building multiplepackage.xml
files to extract specific groups of metadata components.
– David Reed
Dec 14 at 15:23
1
I have seen that whenmaster
grows over time as it's the ultimate source of truth, deploying frommaster
leads to an extended deployment time. From the process I usually follow is to deploy from arelease
branch which consists only of what is required for the release (sort of delta deployment approach), instead of overwriting Production with what's not needed. It though drills down to the process being followed varying from place to place.
– Jayant Das
Dec 14 at 15:37
@JayantDas It's also possible to do a delta deploy off a commit range onmaster
via something likegit diff HEAD~1 --name-only --diff-filter=ACM
. Either way, I agree with you that that is preferable most of the time.
– David Reed
Dec 14 at 16:05
1
@ab0369 It will work perfectly fine. I have used this approach in quite a few of my implementations
– Jayant Das
Dec 15 at 14:33
We want to pull all the metadata components (except - developer.salesforce.com/docs/atlas.en-us.api_meta.meta/…) that can be deployed into the master branch. Since we have limits (You can retrieve and deploy up to 10,000 files or 400 MB) trying to figure out the best process to get all the metadata into master branch, we have huge number of components like classes, triggers, report types, reports, vf pages, lightning components, validation rules, custom objects, fields, email alerts, wfrules, process flows etc. Planning to use sfdx cli mdapi retreive
– ab0369
Dec 14 at 15:19
We want to pull all the metadata components (except - developer.salesforce.com/docs/atlas.en-us.api_meta.meta/…) that can be deployed into the master branch. Since we have limits (You can retrieve and deploy up to 10,000 files or 400 MB) trying to figure out the best process to get all the metadata into master branch, we have huge number of components like classes, triggers, report types, reports, vf pages, lightning components, validation rules, custom objects, fields, email alerts, wfrules, process flows etc. Planning to use sfdx cli mdapi retreive
– ab0369
Dec 14 at 15:19
1
1
If your org exceeds the limits of the Metadata API, you'll need to batch your retrieves by building multiple
package.xml
files to extract specific groups of metadata components.– David Reed
Dec 14 at 15:23
If your org exceeds the limits of the Metadata API, you'll need to batch your retrieves by building multiple
package.xml
files to extract specific groups of metadata components.– David Reed
Dec 14 at 15:23
1
1
I have seen that when
master
grows over time as it's the ultimate source of truth, deploying from master
leads to an extended deployment time. From the process I usually follow is to deploy from a release
branch which consists only of what is required for the release (sort of delta deployment approach), instead of overwriting Production with what's not needed. It though drills down to the process being followed varying from place to place.– Jayant Das
Dec 14 at 15:37
I have seen that when
master
grows over time as it's the ultimate source of truth, deploying from master
leads to an extended deployment time. From the process I usually follow is to deploy from a release
branch which consists only of what is required for the release (sort of delta deployment approach), instead of overwriting Production with what's not needed. It though drills down to the process being followed varying from place to place.– Jayant Das
Dec 14 at 15:37
@JayantDas It's also possible to do a delta deploy off a commit range on
master
via something like git diff HEAD~1 --name-only --diff-filter=ACM
. Either way, I agree with you that that is preferable most of the time.– David Reed
Dec 14 at 16:05
@JayantDas It's also possible to do a delta deploy off a commit range on
master
via something like git diff HEAD~1 --name-only --diff-filter=ACM
. Either way, I agree with you that that is preferable most of the time.– David Reed
Dec 14 at 16:05
1
1
@ab0369 It will work perfectly fine. I have used this approach in quite a few of my implementations
– Jayant Das
Dec 15 at 14:33
@ab0369 It will work perfectly fine. I have used this approach in quite a few of my implementations
– Jayant Das
Dec 15 at 14:33
|
show 3 more comments
While I will recommend to move to Salesforce DX, but in the traditional model of release management, your master should always be a replica of what's in Production.
A typical flow that I have followed in the past is:
Create master as the gold copy. This should be your most recent stable copy of running code in Production. You will need to decide which metadata is important for you to version manage. E.g., if you are only looking to version control Apex, Triggers or you are also looking to version control say Roles, Profiles, etc. My approach to this is that what we build, is version controlled.
As for how to extract the metadata, you will need to use the Metadata API to extract it from Production and commit it to the repo. You can use Force.com IDE for this purpose. Take a look at this link for details.
- Create release branch from master. I have usually followed an approach here is to release only what's needed. This is because if you have too many components in the branch (say, copied over from master) which are not modified in the current release, will lead to a longer deployment time. While this depends on the approach if you want to release from master or release branch, but in general, you should only deploy what's required.
- Developers create feature branches from the release branch, build, test and create PRs to merge to the release branch
- Release your code from the release branch to Production
- Merge with master after release
There can be different ways to handle depending on the scale and choice, but typically this approach has worked well for me.
You will definitely need to take care of other aspects, viz., code reviews, code merging, etc. but as long as you can put the process in the place, this will work fine.
This approach also works if your developers are working out of a single sandbox, because every developer will create the feature branch and before it can be merged to the release branch and moved to upper environments, you can implement a review process to make sure the code being committed to the release branch is validated.
We are not ready for DX yet. we want to reach there but first we want to setup git based deployments. How to get all the metadata into a master branch since we have limits around time and number for components ?
– ab0369
Dec 14 at 15:11
I updated with details in the first bullet in my answer.
– Jayant Das
Dec 14 at 15:13
add a comment |
While I will recommend to move to Salesforce DX, but in the traditional model of release management, your master should always be a replica of what's in Production.
A typical flow that I have followed in the past is:
Create master as the gold copy. This should be your most recent stable copy of running code in Production. You will need to decide which metadata is important for you to version manage. E.g., if you are only looking to version control Apex, Triggers or you are also looking to version control say Roles, Profiles, etc. My approach to this is that what we build, is version controlled.
As for how to extract the metadata, you will need to use the Metadata API to extract it from Production and commit it to the repo. You can use Force.com IDE for this purpose. Take a look at this link for details.
- Create release branch from master. I have usually followed an approach here is to release only what's needed. This is because if you have too many components in the branch (say, copied over from master) which are not modified in the current release, will lead to a longer deployment time. While this depends on the approach if you want to release from master or release branch, but in general, you should only deploy what's required.
- Developers create feature branches from the release branch, build, test and create PRs to merge to the release branch
- Release your code from the release branch to Production
- Merge with master after release
There can be different ways to handle depending on the scale and choice, but typically this approach has worked well for me.
You will definitely need to take care of other aspects, viz., code reviews, code merging, etc. but as long as you can put the process in the place, this will work fine.
This approach also works if your developers are working out of a single sandbox, because every developer will create the feature branch and before it can be merged to the release branch and moved to upper environments, you can implement a review process to make sure the code being committed to the release branch is validated.
We are not ready for DX yet. we want to reach there but first we want to setup git based deployments. How to get all the metadata into a master branch since we have limits around time and number for components ?
– ab0369
Dec 14 at 15:11
I updated with details in the first bullet in my answer.
– Jayant Das
Dec 14 at 15:13
add a comment |
While I will recommend to move to Salesforce DX, but in the traditional model of release management, your master should always be a replica of what's in Production.
A typical flow that I have followed in the past is:
Create master as the gold copy. This should be your most recent stable copy of running code in Production. You will need to decide which metadata is important for you to version manage. E.g., if you are only looking to version control Apex, Triggers or you are also looking to version control say Roles, Profiles, etc. My approach to this is that what we build, is version controlled.
As for how to extract the metadata, you will need to use the Metadata API to extract it from Production and commit it to the repo. You can use Force.com IDE for this purpose. Take a look at this link for details.
- Create release branch from master. I have usually followed an approach here is to release only what's needed. This is because if you have too many components in the branch (say, copied over from master) which are not modified in the current release, will lead to a longer deployment time. While this depends on the approach if you want to release from master or release branch, but in general, you should only deploy what's required.
- Developers create feature branches from the release branch, build, test and create PRs to merge to the release branch
- Release your code from the release branch to Production
- Merge with master after release
There can be different ways to handle depending on the scale and choice, but typically this approach has worked well for me.
You will definitely need to take care of other aspects, viz., code reviews, code merging, etc. but as long as you can put the process in the place, this will work fine.
This approach also works if your developers are working out of a single sandbox, because every developer will create the feature branch and before it can be merged to the release branch and moved to upper environments, you can implement a review process to make sure the code being committed to the release branch is validated.
While I will recommend to move to Salesforce DX, but in the traditional model of release management, your master should always be a replica of what's in Production.
A typical flow that I have followed in the past is:
Create master as the gold copy. This should be your most recent stable copy of running code in Production. You will need to decide which metadata is important for you to version manage. E.g., if you are only looking to version control Apex, Triggers or you are also looking to version control say Roles, Profiles, etc. My approach to this is that what we build, is version controlled.
As for how to extract the metadata, you will need to use the Metadata API to extract it from Production and commit it to the repo. You can use Force.com IDE for this purpose. Take a look at this link for details.
- Create release branch from master. I have usually followed an approach here is to release only what's needed. This is because if you have too many components in the branch (say, copied over from master) which are not modified in the current release, will lead to a longer deployment time. While this depends on the approach if you want to release from master or release branch, but in general, you should only deploy what's required.
- Developers create feature branches from the release branch, build, test and create PRs to merge to the release branch
- Release your code from the release branch to Production
- Merge with master after release
There can be different ways to handle depending on the scale and choice, but typically this approach has worked well for me.
You will definitely need to take care of other aspects, viz., code reviews, code merging, etc. but as long as you can put the process in the place, this will work fine.
This approach also works if your developers are working out of a single sandbox, because every developer will create the feature branch and before it can be merged to the release branch and moved to upper environments, you can implement a review process to make sure the code being committed to the release branch is validated.
edited Dec 14 at 15:26
answered Dec 14 at 15:09
Jayant Das
12k2623
12k2623
We are not ready for DX yet. we want to reach there but first we want to setup git based deployments. How to get all the metadata into a master branch since we have limits around time and number for components ?
– ab0369
Dec 14 at 15:11
I updated with details in the first bullet in my answer.
– Jayant Das
Dec 14 at 15:13
add a comment |
We are not ready for DX yet. we want to reach there but first we want to setup git based deployments. How to get all the metadata into a master branch since we have limits around time and number for components ?
– ab0369
Dec 14 at 15:11
I updated with details in the first bullet in my answer.
– Jayant Das
Dec 14 at 15:13
We are not ready for DX yet. we want to reach there but first we want to setup git based deployments. How to get all the metadata into a master branch since we have limits around time and number for components ?
– ab0369
Dec 14 at 15:11
We are not ready for DX yet. we want to reach there but first we want to setup git based deployments. How to get all the metadata into a master branch since we have limits around time and number for components ?
– ab0369
Dec 14 at 15:11
I updated with details in the first bullet in my answer.
– Jayant Das
Dec 14 at 15:13
I updated with details in the first bullet in my answer.
– Jayant Das
Dec 14 at 15:13
add a comment |
Disclaimer: I work for Gearset
I'm a little late to the party but hopefully I can add something of value - we've got a few largely product-agnostic whitepapers for download that might be useful to you, specifically:
Simplifying Salesforce release management: talks about developing a process from a sandbox to sandbox model to something more complex targeted at the maturity and size of your team.
Version control for Salesforce: this is the follow-up to it, and talks about adopting version control incrementally, again moving from sandbox to sandbox, through introducing version control, picking a branching strategy that suits your workflow, etc.
Adopting Salesforce DX: this one's a little more of a thought piece about where DX fits into this process going forwards, and how you can get the most from DX with teams of mixed technical backgrounds.
They get a little product specific at times, but they're largely agnostic - you can follow a similar approach regardless of the tools you choose. You can download all of the above without needing to give us any contact info etc. at:
https://gearset.com/resources#whitepapers
You can also use Gearset to directly compare and deploy to and from git repos in either the DX or Ant-style metadata formats. This is a really easy way to get that initial batch of metadata into your repo. There's a free 30 day trial etc., so you can do it in a few clicks - just pick a source org and a target bitbucket repo and branch, run a comparison, select all components, add a commit message and hit deploy, and your repo will be ready to go.
I agree with Bill that a subset of metadata makes sense as a starting point, but you're unlikely to get absolute consensus on what that subset should be. Gearset's default set of metadata sits at around 64 types at the moment, and we think that's a good starting point for a manual workflow. For a more automated flow that uses CI, we'd start by stripping that down further, getting your process working, then incrementally adding types as you move forward.
In terms of workflow, I'd echo some of the other posters - aim for a 1:1 mapping between feature branch and dev sandbox (or spin up scratch orgs as appropriate if you go the DX route). We then commonly see a staging / integration org that tracks your master branch, before a UAT environment then obviously prod. Master should always contain only code that you're happy to ship, and if it's broken / undeployable at any point then the priority should be to get that back into a shippable state asap. All merges to master should be via pull / merge request, and all deployments to staging / integration should be from master. From there, ideally you shouldn't have any changes being made directly to other environments - they should all flow from master.
If you do need to make changes directly into an environment like integration, I'd probably recommend setting up another environment for those changes, and then set up a job that monitors that environment and lets you pull changes back to a corresponding branch on a regular cadence. You can then follow the gitflow / PR model and merge that branch back into master on review. Gearset has a monitoring feature that'll notify you when an org has changed from its previous state, and push those changes to a branch in git with a couple of clicks, which would come in very handy here, but you can also set something like that up with Jenkins / TeamCity and the Salesforce CLI / Ant, for example - just a little legwork.
This is a fairly interesting topic to me, so if you want to talk about it feel free to open the on-site chat at gearset.com and leave a message - it'll come straight to the dev team and we're always happy to chat Salesforce DevOps :)
Hope that helps!
add a comment |
Disclaimer: I work for Gearset
I'm a little late to the party but hopefully I can add something of value - we've got a few largely product-agnostic whitepapers for download that might be useful to you, specifically:
Simplifying Salesforce release management: talks about developing a process from a sandbox to sandbox model to something more complex targeted at the maturity and size of your team.
Version control for Salesforce: this is the follow-up to it, and talks about adopting version control incrementally, again moving from sandbox to sandbox, through introducing version control, picking a branching strategy that suits your workflow, etc.
Adopting Salesforce DX: this one's a little more of a thought piece about where DX fits into this process going forwards, and how you can get the most from DX with teams of mixed technical backgrounds.
They get a little product specific at times, but they're largely agnostic - you can follow a similar approach regardless of the tools you choose. You can download all of the above without needing to give us any contact info etc. at:
https://gearset.com/resources#whitepapers
You can also use Gearset to directly compare and deploy to and from git repos in either the DX or Ant-style metadata formats. This is a really easy way to get that initial batch of metadata into your repo. There's a free 30 day trial etc., so you can do it in a few clicks - just pick a source org and a target bitbucket repo and branch, run a comparison, select all components, add a commit message and hit deploy, and your repo will be ready to go.
I agree with Bill that a subset of metadata makes sense as a starting point, but you're unlikely to get absolute consensus on what that subset should be. Gearset's default set of metadata sits at around 64 types at the moment, and we think that's a good starting point for a manual workflow. For a more automated flow that uses CI, we'd start by stripping that down further, getting your process working, then incrementally adding types as you move forward.
In terms of workflow, I'd echo some of the other posters - aim for a 1:1 mapping between feature branch and dev sandbox (or spin up scratch orgs as appropriate if you go the DX route). We then commonly see a staging / integration org that tracks your master branch, before a UAT environment then obviously prod. Master should always contain only code that you're happy to ship, and if it's broken / undeployable at any point then the priority should be to get that back into a shippable state asap. All merges to master should be via pull / merge request, and all deployments to staging / integration should be from master. From there, ideally you shouldn't have any changes being made directly to other environments - they should all flow from master.
If you do need to make changes directly into an environment like integration, I'd probably recommend setting up another environment for those changes, and then set up a job that monitors that environment and lets you pull changes back to a corresponding branch on a regular cadence. You can then follow the gitflow / PR model and merge that branch back into master on review. Gearset has a monitoring feature that'll notify you when an org has changed from its previous state, and push those changes to a branch in git with a couple of clicks, which would come in very handy here, but you can also set something like that up with Jenkins / TeamCity and the Salesforce CLI / Ant, for example - just a little legwork.
This is a fairly interesting topic to me, so if you want to talk about it feel free to open the on-site chat at gearset.com and leave a message - it'll come straight to the dev team and we're always happy to chat Salesforce DevOps :)
Hope that helps!
add a comment |
Disclaimer: I work for Gearset
I'm a little late to the party but hopefully I can add something of value - we've got a few largely product-agnostic whitepapers for download that might be useful to you, specifically:
Simplifying Salesforce release management: talks about developing a process from a sandbox to sandbox model to something more complex targeted at the maturity and size of your team.
Version control for Salesforce: this is the follow-up to it, and talks about adopting version control incrementally, again moving from sandbox to sandbox, through introducing version control, picking a branching strategy that suits your workflow, etc.
Adopting Salesforce DX: this one's a little more of a thought piece about where DX fits into this process going forwards, and how you can get the most from DX with teams of mixed technical backgrounds.
They get a little product specific at times, but they're largely agnostic - you can follow a similar approach regardless of the tools you choose. You can download all of the above without needing to give us any contact info etc. at:
https://gearset.com/resources#whitepapers
You can also use Gearset to directly compare and deploy to and from git repos in either the DX or Ant-style metadata formats. This is a really easy way to get that initial batch of metadata into your repo. There's a free 30 day trial etc., so you can do it in a few clicks - just pick a source org and a target bitbucket repo and branch, run a comparison, select all components, add a commit message and hit deploy, and your repo will be ready to go.
I agree with Bill that a subset of metadata makes sense as a starting point, but you're unlikely to get absolute consensus on what that subset should be. Gearset's default set of metadata sits at around 64 types at the moment, and we think that's a good starting point for a manual workflow. For a more automated flow that uses CI, we'd start by stripping that down further, getting your process working, then incrementally adding types as you move forward.
In terms of workflow, I'd echo some of the other posters - aim for a 1:1 mapping between feature branch and dev sandbox (or spin up scratch orgs as appropriate if you go the DX route). We then commonly see a staging / integration org that tracks your master branch, before a UAT environment then obviously prod. Master should always contain only code that you're happy to ship, and if it's broken / undeployable at any point then the priority should be to get that back into a shippable state asap. All merges to master should be via pull / merge request, and all deployments to staging / integration should be from master. From there, ideally you shouldn't have any changes being made directly to other environments - they should all flow from master.
If you do need to make changes directly into an environment like integration, I'd probably recommend setting up another environment for those changes, and then set up a job that monitors that environment and lets you pull changes back to a corresponding branch on a regular cadence. You can then follow the gitflow / PR model and merge that branch back into master on review. Gearset has a monitoring feature that'll notify you when an org has changed from its previous state, and push those changes to a branch in git with a couple of clicks, which would come in very handy here, but you can also set something like that up with Jenkins / TeamCity and the Salesforce CLI / Ant, for example - just a little legwork.
This is a fairly interesting topic to me, so if you want to talk about it feel free to open the on-site chat at gearset.com and leave a message - it'll come straight to the dev team and we're always happy to chat Salesforce DevOps :)
Hope that helps!
Disclaimer: I work for Gearset
I'm a little late to the party but hopefully I can add something of value - we've got a few largely product-agnostic whitepapers for download that might be useful to you, specifically:
Simplifying Salesforce release management: talks about developing a process from a sandbox to sandbox model to something more complex targeted at the maturity and size of your team.
Version control for Salesforce: this is the follow-up to it, and talks about adopting version control incrementally, again moving from sandbox to sandbox, through introducing version control, picking a branching strategy that suits your workflow, etc.
Adopting Salesforce DX: this one's a little more of a thought piece about where DX fits into this process going forwards, and how you can get the most from DX with teams of mixed technical backgrounds.
They get a little product specific at times, but they're largely agnostic - you can follow a similar approach regardless of the tools you choose. You can download all of the above without needing to give us any contact info etc. at:
https://gearset.com/resources#whitepapers
You can also use Gearset to directly compare and deploy to and from git repos in either the DX or Ant-style metadata formats. This is a really easy way to get that initial batch of metadata into your repo. There's a free 30-day trial, so you can do it in a few clicks - just pick a source org and a target Bitbucket repo and branch, run a comparison, select all components, add a commit message and hit deploy, and your repo will be ready to go.
I agree with Bill that a subset of metadata makes sense as a starting point, but you're unlikely to get absolute consensus on what that subset should be. Gearset's default set of metadata sits at around 64 types at the moment, and we think that's a good starting point for a manual workflow. For a more automated flow that uses CI, we'd start by stripping that down further, getting your process working, then incrementally adding types as you move forward.
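For illustration only, a stripped-down package.xml for that kind of initial, code-focused retrieve could look something like the snippet below. The specific types and the API version are assumptions rather than a recommendation - swap in whatever subset you settle on.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Illustrative subset only: the types and API version here are assumptions -->
    <Package xmlns="http://soap.sforce.com/2006/04/metadata">
        <types>
            <members>*</members>
            <name>ApexClass</name>
        </types>
        <types>
            <members>*</members>
            <name>ApexTrigger</name>
        </types>
        <types>
            <members>*</members>
            <name>ApexPage</name>
        </types>
        <types>
            <members>*</members>
            <name>CustomObject</name>
        </types>
        <version>44.0</version>
    </Package>

You can retrieve that set with the Salesforce CLI (e.g. sfdx force:mdapi:retrieve -k package.xml -r ./mdapi -u YourOrgAlias, where YourOrgAlias is a placeholder), unzip the result and commit it, or let whichever tool you're using handle the retrieve for you.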
In terms of workflow, I'd echo some of the other posters - aim for a 1:1 mapping between feature branch and dev sandbox (or spin up scratch orgs as appropriate if you go the DX route). We then commonly see a staging / integration org that tracks your master branch, before a UAT environment and then production. Master should always contain only code that you're happy to ship, and if it's broken or undeployable at any point then the priority should be to get it back into a shippable state ASAP. All merges to master should be via pull / merge request, and all deployments to staging / integration should be from master. From there, ideally you shouldn't have any changes being made directly to other environments - they should all flow from master.
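As a very rough sketch (not a definitive setup - the directory layout, org alias and test level below are assumptions, and you'd run these from Bitbucket Pipelines, Jenkins or whatever CI tool you use), the two deployment steps in that flow can boil down to a couple of Salesforce CLI commands:

    # Pull request build: check-only validation of the branch against the integration org
    # ("src" and the org alias "integration" are assumptions about your project layout)
    sfdx force:mdapi:deploy --deploydir src --targetusername integration --checkonly --testlevel RunLocalTests --wait 30

    # Master build: after the merge, deploy master to staging / integration for real
    sfdx force:mdapi:deploy --deploydir src --targetusername integration --testlevel RunLocalTests --wait 30

The check-only run gives you deployability and test feedback on every PR without changing the org, and the real deployment only ever happens from master.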
If you do need to make changes directly in an environment like integration, I'd probably recommend setting up another environment for those changes, then setting up a job that monitors it and pulls the changes back to a corresponding branch on a regular cadence. You can then follow the gitflow / PR model and merge that branch back into master on review. Gearset has a monitoring feature that'll notify you when an org has changed from its previous state and push those changes to a branch in git with a couple of clicks, which would come in very handy here, but you can also set something like that up with Jenkins / TeamCity and the Salesforce CLI / Ant, for example - it just takes a little legwork.
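If you do go the DIY route for that monitoring job, a minimal sketch of the scheduled step is below - retrieve the tracked metadata from the org and commit anything that changed to the tracking branch. The branch name, org alias and paths are assumptions, and a production job would want proper error handling around each step:

    #!/bin/bash
    # Minimal org-monitoring sketch: branch, org alias and paths are assumptions
    set -e
    git checkout integration-changes
    # Retrieve the tracked metadata (as defined in package.xml) from the org
    sfdx force:mdapi:retrieve -k package.xml -r ./mdapi -u integration
    unzip -o ./mdapi/unpackaged.zip -d ./mdapi
    cp -r ./mdapi/unpackaged/. ./src/
    # Commit and push only if something actually changed
    if [ -n "$(git status --porcelain src)" ]; then
        git add src
        git commit -m "Org changes picked up by scheduled monitoring job"
        git push origin integration-changes
    fi

From there, review and merging that branch back into master works exactly like any other branch.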
This is a fairly interesting topic to me, so if you want to talk about it feel free to open the on-site chat at gearset.com and leave a message - it'll come straight to the dev team and we're always happy to chat Salesforce DevOps :)
Hope that helps!
edited 2 days ago
answered 2 days ago
mpd106
54126
I don't recommend moving all of your Metadata into git. For many orgs, this is not practical, and the result is a gigantic repo. Use git for assets that need code merge -- like Apex Classes, Triggers, and Pages. Some other assets such as Lightning Bundles, Custom Objects, and Tabs are also good candidates for a repo.
But for the other 200 Metadata Types, use a Change and Release Management tool for comparing and merging live Salesforce orgs. Merge your code assets in the repo and move them into a Sandbox for integration testing. Otherwise you will end up with a massive XML comparison and merge challenge. And just because you merged some XML in a repo does NOT mean it can be deployed in an actual org.
answered Dec 15 at 15:01
Bill Appleton
363
What metadata do you think is not a good option for a repo?
– ab0369
Dec 15 at 21:52
Profiles, for example. They are gigantic and very complex. I would use a repo for assets that a human can merge, like Apex.
– Bill Appleton
Dec 16 at 23:35
Understood and will keep this in mind while implementing
– ab0369
Dec 17 at 3:54
I've worked on projects that do keep profiles in VCS. It is possible, but it takes a lot of discipline in constructing your commits and keeping environments refreshed. I prefer to VCS everything except profiles.
– David Reed
2 days ago